The Glass Cage: Automation and Us

To put it into uncharitable but not inaccurate terms, many doctors may soon find themselves taking on the role of human sensors who collect information for a decision-making computer. The doctors will examine the patient and enter data into electronic forms, but the computer will take the lead in suggesting diagnoses and recommending therapies. Thanks to the steady escalation of computer automation through Bright’s hierarchy, physicians seem destined to experience, at least in some areas of their practice, the same deskilling effect that was once restricted to factory hands.

They will not be alone. The incursion of computers into elite professional work is happening everywhere. We’ve already seen how the thinking of corporate auditors is being shaped by expert systems that make predictions about risks and other variables. Other financial professionals, from loan officers to investment managers, also depend on computer models to guide their decisions, and Wall Street is now largely under the control of correlation-sniffing computers and the quants who program them. The number of people employed as securities dealers and traders in New York City plummeted by a third, from 150,000 to 100,000, between 2000 and 2013, despite the fact that Wall Street firms were often posting record profits. The overriding goal of brokerage and investment banking firms is “automating the system and getting rid of the traders,” one financial industry analyst explained to a Bloomberg reporter. As for the traders who remain, “all they do today is hit buttons on computer screens.”39

That’s true not only in the trading of simple stocks and bonds but also in the packaging and dealing of complex financial instruments. Ashwin Parameswaran, a technology analyst and former investment banker, notes that “banks have made a significant effort to reduce the amount of skill and know-how required to price and trade financial derivatives. Trading systems have been progressively modified so that as much knowledge as possible is embedded within the software.”40

Predictive algorithms are even moving into the lofty realm of venture capitalism, where top investors have long prided themselves on having a good nose for business and innovation. Prominent venture-capital firms like the Ironstone Group and Google Ventures now use computers to sniff out patterns in records of entrepreneurial success, and they place their bets accordingly.

A similar trend is under way in the law. For years, attorneys have depended on computers to search legal databases and prepare documents. Recently, software has taken a more central role in law offices. The critical process of document discovery, in which, traditionally, junior lawyers and paralegals read through reams of correspondence, email messages, and notes in search of evidence, has been largely automated. Computers can parse thousands of pages of digitized documents in seconds. Using e-discovery software with language-analysis algorithms, the machines not only spot relevant words and phrases but also discern chains of events, relationships among people, and even personal emotions and motivations. A single computer can take over the work of dozens of well-paid professionals. Document-preparation software has also advanced. By filling out a simple checklist, a lawyer can assemble a complex contract in an hour or two—a job that once took days.
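
To make the mechanism concrete, here is a minimal sketch, in Python, of the statistical matching that e-discovery tools build on: scoring each document in a corpus against a query by TF-IDF similarity. The documents and query are invented for the example, and real products layer entity extraction, event timelines, and sentiment analysis on top of this kind of core.

```python
# A minimal sketch of the relevance ranking at the core of e-discovery
# tools: score every document against a query by TF-IDF similarity.
# The documents and query are invented; real products layer entity
# extraction, event timelines, and sentiment analysis on top of this.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Meeting notes: discussed the supplier contract and delivery delays.",
    "Email: please shred the Q3 memos before the audit next week.",
    "Lunch order for the team offsite on Friday.",
]
query = "destruction of memos before the audit"

# Project the corpus and the query into the same TF-IDF space.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Rank documents by cosine similarity to the query, most relevant first.
scores = cosine_similarity(query_vector, doc_vectors).flatten()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.3f}  {doc}")
```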

On the horizon are bigger changes. Legal software firms are beginning to develop statistical prediction algorithms that, by analyzing many thousands of past cases, can recommend trial strategies, such as the choice of a venue or the terms of a settlement offer, that carry high probabilities of success. Software will soon be able to make the kinds of judgments that up to now required the experience and insight of a senior litigator.41

Lex Machina, a company started in 2010 by a group of Stanford law professors and computer scientists, offers a preview of what’s coming. With a database covering some 150,000 intellectual property cases, it runs computer analyses that predict the outcomes of patent lawsuits under various scenarios, taking into account the court, the presiding judge and participating attorneys, the litigants, the outcomes of related cases, and other factors.
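
As a schematic of the kind of estimate such a tool produces, the toy below conditions historical plaintiff win rates on a single case feature, the filing venue. The case records are fabricated for illustration; nothing here reflects Lex Machina’s actual models, which weigh many factors across thousands of real cases.

```python
# A schematic of the outcome estimates such tools produce: condition
# historical win rates on a feature of the case, here the filing
# venue. The case records are fabricated; a real system models
# thousands of cases and many more factors at once.

# (venue, judge, plaintiff_won) for hypothetical past patent cases.
past_cases = [
    ("E.D. Texas", "Judge A", True),   ("E.D. Texas", "Judge A", True),
    ("E.D. Texas", "Judge B", False),  ("D. Delaware", "Judge C", True),
    ("D. Delaware", "Judge C", False), ("D. Delaware", "Judge C", False),
]

def win_rate(venue: str) -> float:
    """Historical plaintiff win rate in the given venue."""
    outcomes = [won for v, _, won in past_cases if v == venue]
    return sum(outcomes) / len(outcomes)

# Compare venues before deciding where to file.
for venue in ("E.D. Texas", "D. Delaware"):
    print(f"{venue}: plaintiffs won {win_rate(venue):.0%} of past cases")
```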

Predictive algorithms are also assuming more control over the decisions made by business executives. Companies are spending billions of dollars a year on “people analytics” software that automates decisions about hiring, pay, and promotion. Xerox now relies exclusively on computers to choose among applicants for its fifty thousand call-center jobs. Candidates sit at a computer for a half-hour personality test, and the hiring software immediately gives them a score reflecting the likelihood that they’ll perform well, show up for work reliably, and stick with the job. The company extends offers to those with high scores and sends low scorers on their way.42
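
Here is a minimal sketch of how a score like the one described might be computed, assuming a classifier trained on records of past hires. The features, training data, and decision threshold are all invented for the example; no detail of the actual system is implied.

```python
# An illustrative sketch of a hiring model of the kind described
# above: a classifier trained on past hires that scores new
# applicants. Every feature, record, and threshold here is invented
# for the example; nothing is taken from any vendor's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per past hire, taken from the screening test:
# [conscientiousness 0-1, patience 0-1, typing speed in wpm].
X_train = np.array([
    [0.9, 0.8, 60], [0.4, 0.3, 45], [0.7, 0.9, 55],
    [0.2, 0.5, 40], [0.8, 0.6, 70], [0.3, 0.2, 50],
])
# Label: 1 if that hire performed well and stayed, 0 otherwise.
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new applicant: the predicted probability of success.
applicant = np.array([[0.75, 0.7, 58]])
probability = model.predict_proba(applicant)[0, 1]
print(f"predicted success probability: {probability:.2f}")
print("extend offer" if probability >= 0.5 else "decline")
```
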
UPS uses predictive algorithms to chart daily routes for its drivers. Retailers use them to determine the optimal arrangement of merchandise on store shelves. Marketers and ad agencies use them in deciding where and when to run advertisements and in generating promotional messages on social networks. Managers increasingly find themselves playing a subservient role to software. They review and rubber-stamp plans and decisions produced by computers.

There’s an irony here. In shifting the center of the economy from physical goods to data flows, computers brought new status and wealth to information workers during the last decades of the twentieth century. People who made their living by manipulating signs and symbols on screens became the stars of the new economy, even as the factory jobs that had long buttressed the middle class were being transferred overseas or handed off to robots. The dot-com bubble of the late 1990s, when for a few euphoric years riches flooded out of computer networks and into personal brokerage accounts, seemed to herald the start of a golden age of unlimited economic opportunity—what technology boosters dubbed a “long boom.” But the good times proved fleeting. Now we’re seeing that, as Norbert Wiener predicted, automation doesn’t play favorites. Computers are as good at analyzing symbols and otherwise parsing and managing information as they are at directing the moves of industrial robots. Even the people who operate complex computer systems are losing their jobs to software, as data centers, like factories, become increasingly automated. The vast server farms operated by companies like Google, Amazon, and Apple essentially run themselves. Thanks to virtualization, an engineering technique that uses software to replicate the functions of hardware components like servers, the facilities’ operations can be monitored and controlled by algorithms. Network problems and application glitches can be detected and fixed automatically, often in a matter of seconds. It may turn out that the late twentieth century’s “intellectualization of labor,” as the Italian media scholar Franco Berardi has termed it,43 was just a precursor to the early twenty-first century’s automation of intellect.

It’s always risky to speculate how far computers will go in mimicking the insights and judgments of people. Extrapolations based on recent computing trends have a way of turning into fantasies. But even if we assume, contrary to the extravagant promises of big-data evangelists, that there are limits to the applicability and usefulness of correlation-based predictions and other forms of statistical analysis, it seems clear that computers are a long way from bumping up against those limits. When, in early 2011, the IBM supercomputer Watson took the crown as the reigning champion of Jeopardy!, thrashing two of the quiz show’s top players, we got a preview of where computers’ analytical talents are heading. Watson’s ability to decipher clues was astonishing, but by the standards of contemporary artificial-intelligence programming, the computer was not performing an exceptional feat. It was, essentially, searching a vast database of documents for potential answers and then, by working simultaneously through a variety of prediction routines, determining which answer had the highest probability of being the correct one. But it was performing that feat so quickly that it was able to outthink exceptionally smart people in a tricky test involving trivia, wordplay, and recall.
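
That pipeline can be caricatured in a few lines of Python: retrieve candidate answers from a document store, score each candidate with several independent prediction routines, and return the highest-scoring one. The corpus, scorers, and weights below are toys invented for illustration; Watson ran hundreds of such routines in parallel over a vastly larger corpus.

```python
# A toy version of the strategy described above: pull candidate
# answers from a document store, score each with several independent
# "prediction routines," and return the candidate with the highest
# combined confidence. Everything here is invented for illustration.

CORPUS = {
    "Hamlet": "Shakespeare tragedy featuring a Danish prince and a ghost",
    "Macbeth": "Shakespeare tragedy about a Scottish general and three witches",
    "Othello": "Shakespeare tragedy set largely in Venice and Cyprus",
}

def generate_candidates(clue: str) -> list[str]:
    """Retrieve every document that shares at least one word with the clue."""
    clue_words = set(clue.lower().split())
    return [title for title, text in CORPUS.items()
            if clue_words & set(text.lower().split())]

def keyword_overlap(clue: str, title: str) -> float:
    """Scorer 1: fraction of clue words found in the candidate's document."""
    clue_words = set(clue.lower().split())
    return len(clue_words & set(CORPUS[title].lower().split())) / len(clue_words)

def title_mention(clue: str, title: str) -> float:
    """Scorer 2: 1.0 if the candidate's title appears in the clue itself."""
    return 1.0 if title.lower() in clue.lower() else 0.0

SCORERS = [(keyword_overlap, 0.8), (title_mention, 0.2)]

def best_answer(clue: str) -> str:
    """Combine the weighted scorer outputs and pick the top candidate."""
    candidates = generate_candidates(clue)
    return max(candidates,
               key=lambda c: sum(w * f(clue, c) for f, w in SCORERS))

print(best_answer("this tragedy features three witches and a scottish general"))
```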

Watson represents the flowering of a new, pragmatic form of artificial intelligence. Back in the 1950s and 1960s, when digital computers were still new, many mathematicians and engineers, and quite a few psychologists and philosophers, came to believe that the human brain had to operate like some sort of digital calculating machine. They saw in the computer a metaphor and a model for the mind. Creating artificial intelligence, it followed, would be fairly straightforward: you’d figure out the algorithms that run inside our skulls and then you’d translate those programs into software code. It didn’t work. The original artificial-intelligence strategy failed miserably. Whatever it is that goes on inside our brains, it turned out, can’t be reduced to the computations that go on inside computers.*

Today’s computer scientists are taking a very different approach to artificial intelligence that’s at once less ambitious and more effective. The goal is no longer to replicate the process of human thought—that’s still beyond our ken—but rather to replicate its results. These scientists look at a particular product of the mind—a hiring decision, say, or an answer to a trivia question—and then program a computer to accomplish the same result in its own mindless way. The workings of Watson’s circuits bear little resemblance to the workings of the mind of a person playing Jeopardy!, but Watson can still post a higher score.

In the 1930s, while working on his doctoral thesis, the British mathematician and computing pioneer Alan Turing came up with the idea of an “oracle machine.” It was a kind of computer that, applying a set of explicit rules to a store of data through “some unspecified means,” could answer questions that normally would require tacit human knowledge. Turing was curious to figure out “how far it is possible to eliminate intuition, and leave only ingenuity.” For the purposes of his thought experiment, he posited that there would be no limit to the machine’s number-crunching acumen, no upper bound to the speed of its calculations or the amount of data it could take into account. “We do not mind how much ingenuity is required,” he wrote, “and therefore assume it to be available in unlimited supply.”44

Turing was, as usual, prescient. He understood, as few others did at the time, the latent intelligence of algorithms, and he foresaw how that intelligence would be released by speedy calculations. Computers and databases will always have limits, but in systems like Watson we see the arrival of operational oracle machines. What Turing could only imagine, engineers are now building. Ingenuity is replacing intuition.

Watson’s data-analysis acumen is being put to practical use as a diagnostic aid for oncologists and other doctors, and IBM foresees further applications in such fields as law, finance, and education. Spy agencies like the CIA and the NSA are also reported to be testing the system. If Google’s driverless car reveals the newfound power of computers to replicate our psychomotor skills, to match or exceed our ability to navigate the physical world, Watson demonstrates computers’ newfound power to replicate our cognitive skills, to match or exceed our ability to navigate the world of symbols and ideas.

But the replication of the outputs of thinking is not thinking. As Turing himself stressed, algorithms will never replace intuition entirely. There will always be a place for “spontaneous judgments which are not the result of conscious trains of reasoning.”45 What really makes us smart is not our ability to pull facts from documents or decipher statistical patterns in arrays of data. It’s our ability to make sense of things, to weave the knowledge we draw from observation and experience, from living, into a rich and fluid understanding of the world that we can then apply to any task or challenge. It’s this supple quality of mind, spanning conscious and unconscious cognition, reason and inspiration, that allows human beings to think conceptually, critically, metaphorically, speculatively, wittily—to take leaps of logic and imagination.

Hector Levesque, a computer scientist and roboticist at the University of Toronto, provides an example of a simple question that people can answer in a snap but that baffles computers:

The large ball crashed right through the table because it was made of Styrofoam.

What was made of Styrofoam, the large ball or the table?

We come up with the answer effortlessly because we understand what Styrofoam is and what happens when you drop something on a table and what tables tend to be like and what the adjective large implies. We grasp the context, both of the situation and of the words used to describe it. A computer, lacking any true understanding of the world, finds the language of the question hopelessly ambiguous. It remains locked in its algorithms. Reducing intelligence to the statistical analysis of large data sets “can lead us,” says Levesque, “to systems with very impressive performance that are nonetheless idiot-savants.” They might be great at chess or Jeopardy! or facial recognition or other tightly circumscribed mental exercises, but they “are completely hopeless outside their area of expertise.”46 Their precision is remarkable, but it’s often a symptom of the narrowness of their perception.
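
Levesque’s point can be made concrete with a toy resolver that guesses the referent of “it” purely from co-occurrence counts in a fabricated corpus. Swapping “Styrofoam” for “steel” flips the correct referent, in the spirit of Levesque’s paired examples, yet the statistics confidently get both variants wrong, because the answer turns on physics (a Styrofoam table shatters; a steel ball smashes through) rather than on how often words appear together.

```python
# A toy pronoun resolver in the statistical, "idiot-savant" mold the
# passage describes. The corpus is fabricated; the point is that raw
# co-occurrence has no access to the physics that settles the question.

corpus = [
    "a light styrofoam ball",
    "styrofoam ball decorations",
    "a heavy steel table",
    "the steel table legs",
]

def cooccurrence(material: str, noun: str) -> int:
    """Count corpus lines containing both the material and the noun."""
    return sum(material in line.split() and noun in line.split()
               for line in corpus)

def resolve(material: str) -> str:
    """Guess the referent of 'it' by raw co-occurrence with the material."""
    return max(["ball", "table"], key=lambda noun: cooccurrence(material, noun))

# Correct answers: styrofoam -> the table, steel -> the ball.
# The statistics pick the opposite in both cases.
for material in ("styrofoam", "steel"):
    print(f"made of {material}: resolver picks the {resolve(material)}")
```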
