
Then Deep Blue would go back to the still untouched board and begin evaluating another move. It would repeat this process for many possible moves, scoring each one according to whether it captured a piece, the value of that piece, and whether, and by how much, it improved its overall board position. Finally, it would play the highest-scoring move.
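To make that loop concrete, here is a minimal sketch in Python. The piece values, the "position gain" feature, and the candidate moves are all illustrative assumptions of mine; Deep Blue's real evaluation function weighed many more features and searched many moves deep, but the shape of the procedure, score every candidate against the untouched position and play the top scorer, is the same.

```python
# Toy version of the evaluate-and-pick loop described above.
# Piece values and the "position_gain" feature are illustrative
# assumptions, not Deep Blue's actual evaluation function.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def score_move(move):
    """Score one candidate move from features of the position it produces."""
    score = 0
    if move["captures"]:                         # did the move capture a piece?
        score += PIECE_VALUES[move["captures"]]  # ...and how valuable was it?
    score += move["position_gain"]               # how much the board position improved
    return score

def choose_move(candidate_moves):
    """Evaluate every candidate, then play the highest-scoring one."""
    return max(candidate_moves, key=score_move)

# Three hypothetical candidate moves with precomputed features.
candidates = [
    {"name": "Nxe5", "captures": "pawn",  "position_gain": 2},
    {"name": "Qh5",  "captures": None,    "position_gain": 1},
    {"name": "Rxd8", "captures": "queen", "position_gain": -1},
]
print(choose_move(candidates)["name"])  # prints "Rxd8"
```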

Was Deep Blue thinking?

Maybe. But few would argue it was thinking the way a human thinks. And few experts doubt that it’ll be the same way with AGI. Each researcher trying to achieve AGI has their own approach. Some are purely biological, working to closely mimic the brain. Others are biologically inspired, taking the brain as a cue, but relying more on AI’s hardworking tool kit: theorem provers, search algorithms, learning algorithms, automated reasoning, and more.

We’ll get into some of these, and explore how the human brain actually uses many of the same computational techniques as computers. But the point is, it’s not clear whether computers will think as we define thinking, or whether they’ll ever possess anything like intention or consciousness. Therefore, some scholars say, artificial intelligence equivalent to human intelligence is impossible.

Philosopher John Searle created a thought experiment called the Chinese Room Argument that aims to prove this point:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols, which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols, which are correct answers to the questions (the output).

The man inside the room answers correctly, so the people outside think he can communicate in Chinese. Yet the man doesn’t understand a word of Chinese. Like the man, Searle concludes, a computer will never really think or understand. At best what researchers will get from efforts to reverse engineer the brain will be a refined mimic. And AGI systems will achieve similarly mechanical results.
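The room itself is easy to caricature in code. The sketch below is a toy Python version under my own assumptions: the "rulebook" is an invented lookup table, and the program returns answers that look fluent to an outside observer even though nothing in it represents what any symbol means, which is exactly the intuition Searle wants to trigger.

```python
# Toy Chinese Room: a "rulebook" maps incoming symbol strings to outgoing
# symbol strings. The entries are invented placeholders; the point is only
# that correct-looking answers can come from pure symbol manipulation,
# with no understanding anywhere in the program.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "今天是星期几？": "今天是星期二。",    # "What day is it?" -> "It is Tuesday."
}

def the_room(symbols_passed_in: str) -> str:
    """Follow the instructions; never interpret the symbols."""
    return RULEBOOK.get(symbols_passed_in, "对不起，我不明白。")  # "Sorry, I don't understand."

print(the_room("你好吗？"))  # looks like fluent Chinese to the people outside
```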

Searle’s not alone in believing computers will never think or attain consciousness. But he has many critics, with many different complaints. Some detractors claim he is computerphobic. Others counter that, taken as a whole, everything in the Chinese Room, including the man, comes together to create a system that persuasively “understands” Chinese. Seen this way, Searle’s argument is circular: no part of the room (computer) understands Chinese, ergo the computer cannot understand Chinese.

And you can just as easily apply Searle’s objection to humans: we don’t have a formal description of what understanding language really is, so how can we claim humans “understand” language? We have only observation to confirm that language is understood. Just like the people outside Searle’s room.

What’s so remarkable about the brain’s processes, even consciousness, anyway? Just because we don’t understand consciousness now doesn’t mean we never will. It’s not magic.

Still, I agree with Searle and his critics. Searle is correct in thinking AGI won’t be like us. It will be full of computational techniques whose operation no one fully understands. And computer systems designed to create AGI, called “cognitive architectures,” may be too complex for any one person to grasp anyway. But Searle’s critics are correct in thinking that someday an AGI or ASI could think like us, if we make it that far.

I don’t believe we will. I think our Waterloo lies in the foreseeable future, in the AI of tomorrow and the nascent AGI due out in the next decade or two. Our survival, if it is possible, may depend on, among other things, developing AGI with something akin to consciousness and human understanding, even friendliness, built in. That would require, at minimum, understanding intelligent machines in a fine-grained way, so there’d be no surprises.

Let’s go back to one common definition of the Singularity for a moment, what’s called the “technological Singularity.” It refers to the time in history when we humans share the planet with smarter-than-human intelligence. Ray Kurzweil proposes that we’ll merge with the machines, ensuring our survival. Others propose the machines will enhance our lives, but we’ll continue to live as regular old humans, not human-machine cyborgs. Still others, like me, think the future belongs to machines.

The Machine Intelligence Research Institute was formed to ensure that whatever form our heirs take, our values will be preserved.

In his San Francisco high-rise apartment, Vassar told me, “The stakes are the delivery of human value to humanity’s successors. And through them to the universe.”

To MIRI, the first AGI out of the box must be safe, and therefore carry human value to humanity’s successors in whatever form they appear. If the AGI is not safe, neither humans nor what we value will survive. And it’s not just the future of the earth that’s on the block. As Vassar told me, “MIRI’s mission is to cause the technological singularity to happen in the best possible way, to bring about the best possible future for the universe.”

What would a good outcome for the universe look like?

Vassar gazed out the window at the rush-hour traffic that was just starting to stack up on the iron bridge to Oakland. Somewhere beyond the water lay the future. In his mind, superintelligence had already escaped us. It had colonized our solar system, then our galaxy. Now it was reformatting the universe with megascale building projects, and growing into something so unusual it would be hard for us to grasp.

In that future, he told me, the entire universe becomes a computer or mind, as far beyond our ken as spaceships are beyond flatworms. Kurzweil writes that this is the universe’s destiny. Others agree, but believe that with the reckless development of advanced AI we’ll ensure our elimination as well as that of other beings that might be out there. Just as ASI may not hate us or love us, neither will it hate or love other creatures in the universe. Is our quest for AGI the start of a galaxy-wide plague?

As I left Vassar’s apartment I wondered what could prevent this dystopian vision from coming true. What could stop the annihilating kind of AGI? Furthermore, were there holes in the dystopian hypothesis?

Well, builders of AI and AGI could make it “friendly,” so that whatever evolves from the first AGI won’t destroy us and other creatures in the universe. Or, we might be wrong about AGI’s abilities and “drives,” and the fear that it will conquer the universe could be unfounded.

Maybe AI can never advance to AGI and beyond, or maybe there are good reasons to think it will happen in a different and more manageable way than we currently think possible. In short, I wanted to know what could put us on a safer course to the future.

I intended to ask Eliezer Yudkowsky, creator of the AI Box Experiment. Besides originating that thought experiment, he knew more about Friendly AI, I’d been told, than anyone else in the world.

 

Chapter Four

The Hard Way

With the possible exception of nanotechnology being released upon the world, there is nothing in the whole catalogue of disasters that is comparable to AGI.

—Eliezer Yudkowsky, Research Fellow, Machine Intelligence Research Institute

Fourteen “official” cities make up Silicon Valley, and twenty-five math-and-engineering-focused universities and extension campuses operate within them. They feed the software, semiconductor, and Internet firms that are the latest phase of a technology juggernaut that began here with radio in the first part of the twentieth century. Silicon Valley attracts a third of all the venture capital in the United States. It has the highest number of technology workers per capita of any U.S. metropolitan area, and they’re the best paid, too. The country’s greatest concentration of billionaires and millionaires calls Silicon Valley home.

Here, at the epicenter of global technology, with a GPS in my rental car and another in my iPhone, I drove to Eliezer Yudkowsky’s home the old-fashioned way, with written directions. To protect his privacy, Yudkowsky had e-mailed them to me and asked me not to share them or his e-mail address. He didn’t offer his phone number.

At thirty-three, Yudkowsky, cofounder and research fellow at MIRI, has written more about the dangers of AI than anyone else. When he set out on this career more than a decade ago, he was one of very few people who had made considering AI’s dangers his life’s work. And while he hasn’t taken actual vows, he forgoes activities that might take his eye off the ball. He doesn’t drink, smoke, or do drugs. He rarely socializes. He gave up reading for fun several years ago. He doesn’t like interviews, and prefers to do them on Skype with a thirty-minute time limit. He’s an atheist (the rule, not the exception, among AI experts), so he doesn’t squander hours at a temple or a church. He doesn’t have children, though he’s fond of them, and thinks people who haven’t signed their children up for cryonics are lousy parents.

But here’s the paradox. For someone who supposedly treasures his privacy, Yudkowsky has laid bare his personal life on the Internet. I found, after my first attempts to track him down, that in the corner of the Web where discussions of rationality theory and catastrophe live, he and his innermost musings are unavoidable.

His ubiquity is how I came to know that his younger brother, Yehuda, killed himself at age nineteen in their hometown of Chicago. Yudkowsky’s grief came out in an online rant that still seems raw almost a decade later. And I learned that since dropping out of school in eighth grade, he has taught himself mathematics, logic, the history of science, and whatever else he felt compelled to know on an “as needed” basis. The other skills he’s acquired include delivering compelling talks and writing dense, often funny prose:

I’m a great fan of Bach’s music, and believe that it’s best rendered as techno electronica with heavy thumping beats, the way Bach intended.

Yudkowsky is a man in a hurry, because his job comes with an expiration date: when someone creates AGI. If researchers build it with the proper Yudkowsky-inspired safeguards, he may have saved mankind and maybe more. But if an intelligence explosion kicks in and Yudkowsky has been unsuccessful in implementing safeguards, there’s a good chance we’ll all be goo, and so will the universe. That puts Yudkowsky at the dead center of his own cosmology.

I had come here to learn more about Friendly AI, a term he coined. According to Yudkowsky, Friendly AI is the kind that will preserve humanity and our values forever. It doesn’t annihilate our species or spread into the universe like a planet-eating space plague.

But what is Friendly AI? How do you create it?

I also wanted to hear about the AI Box Experiment. I especially wanted to know, as he role-played the part of the AGI, how he talked the Gatekeeper into setting him free. Someday I expect that you, someone you know, or someone a couple of people removed from you, will be in the Gatekeeper’s seat. He or she needs to know what to anticipate, and how to resist. Yudkowsky might know.

*   *   *

Yudkowsky’s condo is an end unit in a horseshoe of two-story garden apartments with a pond and electric waterfall in the central courtyard. Inside, his apartment is spotless and airy. A PC and monitor dominate the breakfast island, where he’d planted a sole padded barstool from which he could look out onto the courtyard. From here he does his writing.

Yudkowsky is tall, nearly six feet and leaning toward endomorphism—that is, he’s round, but not fat. His gentle, disarming manners were a welcome change from the curt, one-line e-mails that had been our relationship’s thin thread.

We sat on facing couches. I told Yudkowsky my central fear about AGI is that there’s no programming technique for something as nebulous and complex as morality, or friendliness. So we’ll get a machine that’ll excel in problem solving, learning, adaptive behavior, and commonsense knowledge. We’ll think it’s humanlike. But that will be a tragic mistake.

Yudkowsky agreed. “If the programmers are less than overwhelmingly competent and careful about how they construct the AI, then I would fully expect you to get something very alien. And here’s the scary part: just like dialing nine-tenths of my phone number correctly does not connect you to someone who is 90 percent similar to me, if you are trying to construct the AI’s whole system and you get it 90 percent right, the result is not 90 percent good.”

In fact, it’s 100 percent bad. Cars aren’t out to kill you, Yudkowsky analogized, but their potential deadliness is a side effect of building cars. It would be the same with AI. It wouldn’t hate you, but you are made of atoms it may have other uses for, and it would, Yudkowsky said, “… tend to resist anything you did to try and keep those atoms to yourself.” So, a side effect of thoughtless programming is that the resulting AI will have a galling lack of propriety about your atoms.
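Here is a toy sketch, in Python, of the “90 percent right is not 90 percent good” point. The plans, scores, and the omitted harm penalty are all invented for illustration; this is not Yudkowsky’s or MIRI’s formalism, just a reminder that an optimizer with a nearly correct objective does not produce a nearly correct result, it produces whatever the flawed objective actually rewards.

```python
# Toy illustration of "90 percent right is not 90 percent good": an optimizer
# picks whichever plan scores highest under its objective. The plans, scores,
# and the missing harm penalty are invented for illustration only.

PLANS = {
    "pursue the goal carefully":    {"goal_progress": 8,  "harm_to_people": 0},
    "pursue the goal by any means": {"goal_progress": 10, "harm_to_people": 9},
}

def intended_objective(plan):
    # What the designers meant: progress minus a heavy penalty for harm.
    return plan["goal_progress"] - 10 * plan["harm_to_people"]

def almost_right_objective(plan):
    # "Almost right": one term, the harm penalty, was left out.
    return plan["goal_progress"]

def choose(plans, objective):
    return max(plans, key=lambda name: objective(plans[name]))

print(choose(PLANS, intended_objective))      # the careful plan
print(choose(PLANS, almost_right_objective))  # the alien plan wins on the near-miss objective
```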

And neither the public nor the AI’s developers will see the danger coming until it’s too late.

“Here is this tendency to think that well-intentioned people create nice AIs, and badly intentioned people create evil AIs. This is not the source of the problem. The source of the problem is that even when well-intentioned people set out to create AIs they are not very concerned with Friendly AI issues. They themselves assume that if they are good-intentioned people the AIs they make are automatically good intentioned, and this is not true. It’s actually a very difficult mathematical and engineering problem. I think most of them are just insufficiently good at thinking of uncomfortable thoughts. They started out not thinking, ‘Friendly AI is a problem that will kill you.’”

Yudkowsky said that AI makers are infected by the idea of a blissful AI-enhanced future that lives in their imaginations. They have been thinking about it since the AI bug first bit them.

“They do not want to hear anything that contradicts that. So if you present unfriendly AI to them it bounces off. As the old proverb goes, most of the damage is done by people who wish to feel themselves important. Many ambitious people find it far less scary to think about destroying the world than to think about never amounting to much of anything at all. All the people I have met who think they are going to win eternal fame through their AI projects have been like this.”
