Final Jeopardy

by Stephen Baker

As in many fields of science, researchers in Artificial Intelligence have long fallen into two groups, pragmatists and visionaries. And most of the visionaries, including Tenenbaum, argue that machines like Watson merely simulate intelligence by racing through billions of correlations. Watson and its kin don't really “know” or “understand” anything. Watson can ace Jeopardy clues on Shakespeare, but only because the ones and zeros that spell out “Shakespeare” pop up on lists and documents near other strings of ones and zeros representing playwrights, England, Hamlet, Elizabethan, and so on. It lacks anything resembling awareness. Most reject the suggestion that the clusters of data nestled among its transistors mirror the memories encoded chemically in the human brain or that Watson's search for Jeopardy answers, and its statistical methods of balancing one candidate answer with another, mimic what goes on in Ken Jennings's head.
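The critics' point can be made concrete with a toy sketch. What follows is a hypothetical illustration in Python, not IBM's code: a program that “associates” Shakespeare with playwrights and Hamlet only because the strings co-occur in documents, with nothing resembling understanding anywhere in the loop.

from collections import Counter
from itertools import combinations

# A tiny corpus standing in for Watson's millions of documents.
corpus = [
    "shakespeare wrote hamlet",
    "hamlet is an elizabethan play",
    "shakespeare was an elizabethan playwright",
]

# Count how often pairs of words appear in the same document.
cooccur = Counter()
for doc in corpus:
    for pair in combinations(sorted(set(doc.split())), 2):
        cooccur[pair] += 1

def best_associate(word):
    # "Answer" by naming the word most often seen near the clue word.
    scores = {(b if a == word else a): n
              for (a, b), n in cooccur.items() if word in (a, b)}
    return max(scores, key=scores.get)

print(best_associate("shakespeare"))  # pure co-occurrence statistics, no awareness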

The parallels, Tenenbaum said, are deceiving. Watson, for example, appears to learn. But its learning comes from adjusting its judgments to feedback, moving toward the combinations that produce correct answers and away from errors. These “error-driven learning algorithms,” he said, are derived from experiments in behavioral psychology. “The animals do something, and they're rewarded or they're punished,” he said. That kind of learning may be crucial to survival, leading humans and many animals alike to recoil from flames, coiled snakes, and bitter, potentially poisonous berries. But this describes a primitive level of brain function. What's more, Watson's learning laboratory was limited, extending only to its 75 gigabytes of data and the instructions of its algorithms. Outside that universe, Tenenbaum stressed, Watson knew nothing. And it formed no theories.
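The error-driven learning he describes reduces to a simple loop: weight the evidence sources, compare the machine's answer to the right one, and nudge the weights accordingly. A minimal sketch, assuming an invented setup in which each “expert” scores a candidate answer:

def update_weights(weights, expert_scores, was_correct, lr=0.1):
    # Reward pushes weights toward the experts that backed the answer;
    # punishment pushes them away. All numbers here are invented.
    reward = 1.0 if was_correct else -1.0
    return [w + lr * reward * s for w, s in zip(weights, expert_scores)]

weights = [0.5, 0.5, 0.5]   # start with no opinion about the three experts
scores = [0.9, 0.2, 0.7]    # how strongly each expert backed the chosen answer
weights = update_weights(weights, scores, was_correct=True)
print(weights)              # experts that backed a correct answer gain influence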

Ferrucci didn't disagree. Watson had its limitations. One time, when Ferrucci learned that another scientist had disparaged Watson as an “idiot savant,” he said, “Idiot savant? I'll take it!” Far from taking offense at the term, Ferrucci said he only wished that Watson could approach the question-answering mastery of humans like Kim Peek, the so-called megasavant played by Dustin Hoffman in the movie Rain Man. Peek, who died in 2009, was a walking encyclopedia. He had read voluminously and seemed to recall every detail with precision. Yet he had grave physical and developmental shortcomings. His brain was missing the corpus callosum, the bundle of nerves connecting the two hemispheres. He had little meaningful interaction with people—with the exception of his father—and he did not appear to draw sophisticated conclusions from his facts, much less come up with theories. He was a stunted genius. But unlike Watson, he was entirely fluent in language. As far as Ferrucci was concerned, a Q-A machine with the language proficiency of a human was a dream. It would have boundless market potential. He would leave it to other kinds of machines to come up with theories.

The question was whether computers like Watson, products of this pragmatic, problem-solving (and profit-seeking) side of the AI world, were on a path toward higher intelligence. Within a decade, computers would likely run five hundred times as fast and would race through databases a thousand times as large. Studies predicted that within fifteen years, a single supercomputer would be able to carry out 10^20 calculations per second. This was enough computing power to count every grain of sand on earth in a single second (assuming it didn't have more interesting work to do). At the same time, the algorithms running such machines, each one resulting from decades of rigorous Darwinian sifting, would be smarter and more precise. Would these supercharged descendants of Watson still be in the business of simulating intelligence? Or could they make the leap to a human level, then advance beyond?
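The sand comparison checks out arithmetically. Commonly quoted rough estimates put the number of grains of sand on Earth's beaches at around 7.5 x 10^18 (an estimate, not a measurement), so a machine running 10^20 operations per second would finish the count with most of the second to spare:

ops_per_second = 10**20        # the projected supercomputer speed
grains_of_sand = 7.5 * 10**18  # a commonly quoted rough estimate, not a measurement

print(grains_of_sand / ops_per_second)  # 0.075 seconds, well under one second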

The AI community was full of doubters. And their concerns about the limitations of statistical crunchers like Watson stirred plenty of debate among scientists. Going back decades, the sparkling vision of AI was to develop machines that could think, know, and learn. Watson, many argued, landed its star spot on national television without accomplishing any of those goals. A human answering a Jeopardy question draws on “layers and layers of knowledge,” said MIT's Sajit Rao. “There's so much knowledge around every single word.” Watson couldn't compare. “If you ask Watson what time it is,” wrote one computer scientist in an e-mail, “it won't have an answer.”

If Watson hadn't been so big, few would have cared. But the size and scope of the project, and the razzmatazz surrounding it, fueled resentment. Big Blue was a leading force in AI, and its focus on Jeopardy funneled more research dollars toward its statistical approach. What's more, Watson was sure to hog the press. The publicity leading up to the man-machine Jeopardy showdown would likely shine a brighter spotlight on AI than anything since the 1997 chess match between Garry Kasparov and Deep Blue. Yet the world would see, and perhaps fall in love with, a machine that only simulated intelligence. In many respects, it was dumb. And despite its mastery of statistics, it knew nothing. Worse, if Watson—despite these drawbacks—proved to be an effective and versatile knowledge machine, it might spell the end of competing technologies, turning years of research—entire careers—into dead ends. The final irony: At least a few scientists kept their criticism of Watson private for fear of alienating Big Blue, a potential sponsor of their research.

In sum, from a skeptic's view, the machine was too dumb, too ignorant, too famous, and too rich. (In that sense, IBM's computer resembled lots of other television stars. And, interestingly enough, the resentment within the field mirrored the combination of envy and contempt that serious actors feel for the celebrities on reality TV.)

These shortcomings aside, Watson had one quality that few could ignore. In the broad realm of Jeopardy, it worked. It made sense of most of the clues, even those in complex English, and it came up with answers within a few seconds. The question was whether other lines of research in AI would surpass it—or perhaps one day endow a machine with the human smarts or expertise that it lacked.

Dividing the pragmatists like Ferrucci and the idealists within AI was the human brain. For many, including Tenenbaum, the path toward true machine intelligence had less to do with the power of the computer than the nature of its instructions and architecture. Only the brain, they believed, held the keys to higher levels of thinking—to concepts, ideas, and theories. But those keys were tangled up in the most complex circuitry known in the universe.

Tenenbaum compared the effort required to build theorizing and idea-spouting machines with the American push, a half century earlier, to send a manned mission to the moon. The moon shot, he said, was far easier. When President Kennedy issued his call for a lunar mission in May 1961, most of the basic scientific research had already been accomplished. Indeed, the march toward space travel had begun early in the seventeenth century, when Galileo started to write down the mathematical equations describing how certain objects moved. This advanced through the Scientific and Industrial Revolutions, from the physics of Newton to the harnessing of electricity, the understanding of chemical bonds, the development of powerful fuels, the creation of metal alloys, and, finally, advances in rocket technology. By the 1960s, the basic science behind sending a spaceship to the moon was largely complete. Much of the technology existed. It was up to the engineers to assemble the pieces, build them to the proper scale, and send the finished spacecraft skyward.

“If you want to compare [AI] to the space program, we're at Galileo,” Tenenbaum said. “We're not yet at Newton.” He is convinced that while ongoing research into the brain is shining a light on intelligence, the larger goal—to reverse engineer human thought—will require immense effort and time. An enormous amount of science awaits before the engineering phase can begin. “The problem is exponentially harder [than manned space flight],” he said. “I wouldn't be surprised if it took a couple hundred years.”

No wonder, you might say, that IBM opted for a more rapid approach. Yet even as Tenenbaum and others engage in quasi-theological debates about the future of intelligence, many are already snatching ideas from the brain to build what they can in the here-and-now. Tenenbaum's own lab, using statistical formulas inspired by brain functions, is training computers to sort through scarce data and make predictions about everything from the location of oil deposits to suicide bombing attacks. For this, he hopes to infuse the machines with a thread or two of logic inspired by observations of the brain, helping them to connect dots the way people do. At the same time, legions of theorists, focused on the exponential advances in computer technology, are predicting that truly smart machines, also inspired by the brain, will be arriving far ahead of Tenenbaum's timetable. They postulate that within decades, computers more intelligent than humans will dramatically alter the course of human evolution.
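Tenenbaum's published work leans heavily on Bayesian inference, so the flavor of those “statistical formulas” is probably close to the following sketch. It is an invented example, not his lab's code: a belief is revised after only two scarce observations.

def posterior(prior, likelihood, observations):
    # Multiply each hypothesis's probability by how well it explains
    # the evidence, then renormalize. Classic Bayesian updating.
    post = dict(prior)
    for obs in observations:
        post = {h: p * likelihood[h](obs) for h, p in post.items()}
        total = sum(post.values())
        post = {h: p / total for h, p in post.items()}
    return post

# Invented numbers: does a survey site sit over an oil deposit?
prior = {"oil": 0.1, "no_oil": 0.9}
likelihood = {
    "oil":    lambda reading: 0.8 if reading == "anomaly" else 0.2,
    "no_oil": lambda reading: 0.1 if reading == "anomaly" else 0.9,
}
print(posterior(prior, likelihood, ["anomaly", "anomaly"]))  # belief in "oil" jumps to ~0.88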

Meanwhile, other scientists in the field pursue a different type of question-answering system—a machine that actually knows things. For two generations, an entire community in AI has tried to teach computers about the world, describing the links between oxygen and hydrogen, Indiana and Ohio, tables and chairs. The goal is to build knowledge engines, machines very much like Watson but capable of much deeper reasoning. They have to know things and understand certain relationships to come up with insights. Could the emergence of a data-crunching wonder like Watson short-circuit their research? Or could their work help Watson grow from a dilettante into a scholar?
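The “links” these researchers encode are typically stored as subject-relation-object triples. A toy version, with hypothetical facts and relation names, shows both the appeal and the cost: every connection must be written down by someone.

# A toy knowledge base of subject-relation-object triples, the staple
# of the knowledge-engine school. Facts and relation names are illustrative.
facts = {
    ("water", "composed_of", "hydrogen"),
    ("water", "composed_of", "oxygen"),
    ("indiana", "borders", "ohio"),
    ("chair", "is_a", "furniture"),
}

def query(subject, relation):
    # Return every object linked to the subject by the given relation.
    return {o for s, r, o in facts if s == subject and r == relation}

print(query("water", "composed_of"))  # {'hydrogen', 'oxygen'}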

In the first years of the twenty-first century, Paul Allen, the cofounder of Microsoft, was pondering Aristotle. For several decades in the fourth century BC, that single Greek philosopher was believed to hold most of the world's scientific knowledge in his head. Aristotle was like the Internet and Google combined. He stored the knowledge and located it. In a sense, he outperformed the Internet because he combined his factual knowledge with a mastery of language and context. He could answer questions fluently, and he was reputedly a wonderful teacher.

This isn't to say that as an information system, Aristotle had no shortcomings. First, the universe of scientific knowledge in his day was tiny. (Otherwise it wouldn't have fit into one head, no matter how brilliant.) What's more, the bandwidth in and out of his prodigious mind was severely limited. Only a small group of philosophers and students (including the future Alexander the Great) enjoyed access to it, and then only during certain hours of the day, when the philosopher turned his attention to them. He did have to study, after all. Maintaining omniscience—or even a semblance of it—required hard work.

For perhaps the first time since the philosopher's death, as Allen saw it, a single information system—the Internet—could host the universe of scientific knowledge, or at least a big chunk of it. But how could people gain access to this treasure, learn from it, winnow the truth from fiction and innuendo? How could computers teach us? The solution, it seemed to him, was to create a question-answering system for science, a digital Aristotle.

For years, Allen had been plowing millions into research on computing and the human brain. In 2003, he directed his technology incubator, Vulcan Inc., of Seattle, to sponsor long-range research to develop a digital Aristotle. The Vulcan team called it Project Halo. This scientific expert, they hoped, would fill a number of roles, from education to research. It would answer questions for students, maybe even develop a new type of interactive textbook. And it would serve as an extravagantly well-read research assistant in laboratories.

For Halo to succeed in these roles, it needed to do more than simply find things. It had to weave concepts together. This meant understanding, for example, that when water reaches 100 degrees centigrade it turns into steam and behaves very differently. Plenty of computers could impart that information. But how many could incorporate such knowledge into their analysis and reason from it? The idea of Halo was to build a system that, at least by a liberal definition of the word, could think.
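The gap between imparting that fact and reasoning from it is the gap between a lookup and a rule that fires. A minimal forward-chaining sketch, using invented rules rather than Halo's actual formalism, derives the steam conclusion instead of merely storing it:

# Minimal forward chaining: keep applying rules until no new facts appear.
# The rules are illustrative, not Project Halo's actual formalism.
facts = {("water", "temperature_c", 100)}
rules = [
    (lambda f: ("water", "temperature_c", 100) in f, ("water", "phase", "gas")),
    (lambda f: ("water", "phase", "gas") in f, ("water", "behaves_as", "steam")),
]

changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        if condition(facts) and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # the steam conclusion is derived, not stored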

The pilot project was to build a computer that could pass the college-level Advanced Placement tests in chemistry. Chemistry, said Noah Friedland, who ran the project for Vulcan, seemed like the ideal subject for a computer. It was a hard science “without a lot of psychological interpretations.” Facts were facts, or at least closer to them in chemistry than in squishier domains, like economics. And unlike biology, in which tissue scans and genomic research were unveiling new discoveries every month or two, chemistry was fairly settled. Halo would also sidestep the complications that came with natural language. Vulcan let the three competing companies, two American and one German, translate the questions from English into a logic language that their systems could understand. At some point in the future, they hoped, this digital Aristotle would banter back and forth in human languages. But in the four-month pilot, it just had to master the knowledge and logic of high school chemistry.

The three systems passed the test, albeit with middling scores. But if you looked at the process, you'd hardly know that machines were involved. Teaching chemistry to these systems required a massive use of human brainpower. Teams of humans—knowledge engineers—had to break down the fundamentals of chemistry into components that the computers could handle. Since the computer couldn't develop concepts on its own, it had to learn them as exhaustive lists and laws. “We looked at the cost, and we said, ‘Gee, it costs $10,000 per textbook page to formulate this knowledge,'” Friedland said.

It seemed ludicrous. Instead of enlisting machines to help sort through the cascades of new scientific information, the machines were enlisting humans to encode the tiniest fraction of it—and at a frightful cost. The Vulcan team went on to explore ways in which thousands, or even millions, of humans could teach these machines more efficiently. In their vision, entire communities of experts would educate these digital Aristotles, much the way online communities were contributing their knowledge to create Wikipedia. Work has continued through the decade, but the two principles behind the Halo thinking haven't changed: First, smart machines require smart teachers, and only humans are up to the job. Second, to provide valuable answers, these computers have to be fed factual knowledge, laws, formulas, and equations.
