
In the tumultuous 1960s leading up to his creating the intelligence explosion concept, Good might already have been thinking about the kinds of problems an intelligent machine could help with. There were no more hostile German U-boats to sink, but there was the hostile Soviet Union, the Cuban Missile Crisis, the assassination of President Kennedy, and the proxy war between the United States and China, fought across Southeast Asia. Man skated toward the brink of extinction—it seemed time for a new Colossus. In "Speculations," Good wrote:

[Computer pioneer] B. V. Bowden stated … that there is no point in building a machine with the intelligence of a man, since it is easier to construct human brains by the usual method … This shows that highly intelligent people can overlook the “intelligence explosion.” It is true that it would be uneconomical to build a machine capable only of ordinary intellectual attainments, but it seems fairly probable that if this could be done then, at double the cost, the machine could exhibit ultraintelligence.

So, for a few dollars more you can get ASI, artificial superintelligence, Good proposes. But then watch out for the civilization-wide ramifications of sharing the planet with smarter-than-human intelligence.

In 1962, before he’d written “Speculations Concerning the First Ultraintelligent Machine,” Good edited a book called The Scientist Speculates. He wrote a chapter entitled “The Social Implications of Artificial Intelligence,” a kind of warm-up for the superintelligence ideas he was developing. As Steve Omohundro would argue almost fifty years later, Good noted that among the problems intelligent machines will have to address are those caused by their own disruptive appearance on Earth.

Such machines … could even make useful political and economic suggestions; and they would need to do so in order to compensate for the problems created by their own existence. There would be problems of overpopulation, owing to the elimination of disease, and of unemployment, owing to the efficiency of low-grade robots that the main machines had designed.

But, as I was soon to learn, Good had a surprising change of heart later in life. I had always grouped him with optimists like Ray Kurzweil, because he’d seen machines “save” the world before, and his essay hangs man’s survival on the creation of a superintelligent one. But Good’s friend Leslie Pendleton had alluded to a turnabout. It took her a while to remember the occasion, but on my last day in Blacksburg, she did.

In 1998, Good was given the Computer Pioneer Award of the IEEE (Institute of Electrical and Electronics Engineers) Computer Society. He was eighty-two years old. For his acceptance speech he was asked to provide a biography. He submitted it, but he did not read it aloud, nor did anyone else, during the ceremony. Probably only Pendleton knew it existed. She included a copy along with some other papers I requested, and gave them to me before I left Blacksburg.

Before taking on Interstate 81 and heading back north, I read it in my car in the parking lot of a Rackspace Inc. cloud computing center. Like Amazon and Google, Rackspace (corporate slogan: Fanatical Support®) provides massive computing power for little money by renting time on its arrays of tens of thousands of processors and exabytes of storage. Of course Virginia “Invent the Future” Tech would have a Rackspace facility at hand, and I wanted a tour, but it was closed. Only later did it seem eerie that a dozen yards from where I sat reading Good’s biographical notes, tens of thousands of air-cooled processors toiled away on the world’s problems.

In the bio, playfully written in the third person, Good summarized his life’s milestones, including a probably never-before-seen account of his work at Bletchley Park with Turing. But here’s what he wrote in 1998 about the first superintelligence, and his late-in-the-game U-turn:

[The paper] “Speculations Concerning the First Ultraintelligent Machine” (1965) … began: “The survival of man depends on the early construction of an ultraintelligent machine.” Those were his [Good’s] words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

I read that and stared dumbly at the Rackspace building. As his life wound down, Good had revised more than his belief in the probability of God’s existence. I’d found a message in a bottle, a footnote that turned everything around. Good and I had something important in common now. We both believed the intelligence explosion wouldn’t end well.

 

Chapter Eight

The Point of No Return

But if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the “threat” and be in deadly fear of it, progress toward the goal would continue. In fact, the competitive advantage—economic, military, even artistic—of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will.

—Vernor Vinge, “The Coming Technological Singularity,” 1993

This quotation sounds like a fleshed-out version of I. J. Good’s biographical aside, doesn’t it? Like Good, five-time Hugo Award-winning science fiction author and mathematics professor Vernor Vinge alludes to humans’ lemminglike predilection to chase glory into the cannon’s mouth, to borrow Shakespeare’s phrase. Vinge told me he’d never read Good’s self-penned biographical paragraphs, or learned about his late-in-life change of heart about the intelligence explosion. Probably only Good and Leslie Pendleton knew about it.

Vernor Vinge was the first person to formally use the word “singularity” when describing the technological future—he did it in a 1993 address to NASA, entitled “The Coming Technological Singularity.” Mathematician Stanislaw Ulam reported that he and polymath John von Neumann had used “singularity” in a conversation about technological change thirty-five years earlier, in 1958. But Vinge’s coinage was public, deliberate, and set the singularity ball rolling into the hands of Ray Kurzweil and what is today a Singularity movement.

With that street cred, why doesn’t Vinge work the lecture and conference circuits as the ultimate Singularity pundit?

Well, singularity has several meanings, and Vinge’s usage is more precise than others. To define singularity he drew an analogy to a black hole: surrounding it is a boundary, called the event horizon, beyond which light cannot escape, so you can never see what’s going on inside. Similarly, once we share the planet with entities more intelligent than ourselves, all bets are off—we cannot predict what will happen. You’d have to be at least that smart yourself to know.
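The physics behind the metaphor is standard, though neither Vinge nor Good writes it out; the formula below is a gloss added here only to unpack the analogy. For a nonrotating black hole of mass M, the event horizon sits at the Schwarzschild radius,

r_s = \frac{2GM}{c^2},

where G is the gravitational constant and c the speed of light. No signal emitted from inside r_s can ever reach an outside observer, just as, in Vinge’s usage, no forecast made on our side of the singularity can describe what lies past it.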

So, if you don’t have a good sense of how the future works out, how do you write about it? Vinge doesn’t write science fantasy—he’s what’s known as a hard sci-fi author, using real science in his fiction. The singularity left him hamstrung.

It’s a problem we face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity—a place where extrapolation breaks down and new models must be applied—and the world will pass beyond our understanding.

As Vinge tells it, when he began writing science fiction in the 1960s, the science-based worlds he wrote about were forty or fifty years away. But by the 1990s the future was running toward him, and the rate of technological change seemed to be accelerating. He could no longer anticipate what the future would bring, because he reckoned it would soon contain greater-than-human intelligence. That intelligence, and not mankind’s, would establish the rate of technological progress. He couldn’t write about it, and neither could anyone else.

Through the sixties and seventies and eighties, recognition of the cataclysm spread. Perhaps it was the science-fiction writers who felt the first concrete impact. After all, the “hard” science-fiction writers are the ones who try to write specific stories about all that technology may do for us. More and more, these writers felt an opaque wall across the future.

AI researcher Ben Goertzel told me, “Vernor Vinge saw its inherent unknowability very clearly when he posited the notion of the technological singularity. It’s because of that that he doesn’t go around giving speeches about it because he doesn’t know what to say. What’s he going to say? ‘Yeah I think we’re going to create technologies that will be much more capable than humans and then who knows what will happen?’”

But what about the invention of fire, agriculture, the printing press, electricity? Haven’t many technological “singularities” already occurred? Disruptive technological change is nothing new, but no one felt compelled to come up with fancy names for its occurrences. My grandmother was born before automobiles were widely used, and lived to see Neil Armstrong walk on the moon. Her name for it was the twentieth century. What makes Vinge’s transition so special?

“The secret sauce is intelligence,” Vinge told me. His voice is a rapid-fire tenor, given to laughter. “Intelligence is what makes it different, and the defining operational feature is that the prior folk can’t understand. We are in a situation that in a very brief time, just a few decades, we’ll be getting transformations that are, by analogy, of biologically large significance.”

Two important ideas are packed into this. First, the technological singularity will bring about a change in intelligence itself, the solely human superpower that creates technology to begin with. That’s why it’s different from any other revolution. Second, the biological transformation Vinge alludes to is when mankind took the world stage some two hundred thousand years ago. Because he was more intelligent than any other species, Homo sapiens, or “wise man,” began to dominate the planet. Similarly, minds a thousand or a million times more intelligent than man’s will change the game forever. What will happen to us?

This drew a percussive laugh from Vinge. “If I get pushed hard about questions about what the Singularity is going to be like, my most common retreat is to say, Why do you think I called it the singularity?”

But Vinge has concluded one thing about the opaque future—the Singularity is menacing, and could lead to our extinction. The author, whose 1993 speech quotes Good’s 1965 intelligence explosion paragraph in its entirety, points out that the famous statistician didn’t take his conclusions far enough:

Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind’s “tool”—any more than humans are the tools of rabbits or robins or chimpanzees.

That’s another apt analogy—rabbits are to humans as humans will be to superintelligent machines. And how do we treat rabbits? As pests, pets, or dinner. ASI agents will be our tools at first—their ancestors Google, Siri, and Watson are now. And, Vinge suggests, there are more ways besides stand-alone machine intelligence that a singularity could come about. They include intelligence emerging from the Internet, from the Internet plus its users (a digital Gaia), from human-computer interfaces, and from the biological sciences (improving the intelligence of future generations through gene manipulation).

In three of these routes, humans stay involved throughout the technologies’ development, perhaps guiding a gradual and manageable intelligence enhancement rather than an explosion. So it’s possible, Vinge says, to consider how mankind’s greatest problems—hunger, disease, even death itself—may be conquered. That’s the vision espoused by Ray Kurzweil and promulgated by “Singularitarians.” Singularitarians are those who anticipate that mostly good things will emerge from the accelerated future. Their “singularity” sounds too rosy for Vinge.

“We’re playing a very high-stakes game, and the plus side of it is so optimistic that that by itself is sort of scary. A worldwide economic wind is associated with these advances in AI. And that is an extraordinarily powerful force. So, there’s hundreds of thousands of people in the world, very smart people, who are working on things that lead to superhuman intelligence. And probably most of them don’t even look at it that way. They look at it as faster, cheaper, better, more profitable.”

Vinge compares it to the Cold War strategy called MAD—mutually assured destruction. Coined by acronym-loving John von Neumann (also the creator of an early computer with the winning initials MANIAC), MAD maintained Cold War peace through the promise of mutual obliteration. Like MAD, the race to superintelligence involves many researchers secretly working to develop technologies with catastrophic potential. But it’s like mutually assured destruction without any commonsense brakes. No one will know who is ahead, so everyone will assume someone else is. And as we’ve seen, the winner won’t take all. The winner in the AI arms race will win the dubious distinction of being the first to confront the Busy Child.

“We’ve got thousands of good people working all over the world in sort of a community effort to create a disaster,” Vinge said. “The threat landscape going forward is very bad. We’re not spending enough effort thinking about failure possibilities.”

*   *   *

Some of the other scenarios Vinge is concerned about also warrant more attention. A digital Gaia, or marriage of humans and computers, is already organizing on the Internet. What that will mean for our future is profound and far-reaching, and deserving of more books than have been written about it. IA, or intelligence augmentation, has a potential for disaster similar to stand-alone AI’s, mitigated somewhat by the fact that a human takes part, at least at first. But that advantage will quickly disappear. We’ll talk more about IA later. First, I want to pay attention to Vinge’s notion that intelligence could emerge from the Internet.
