Our Final Invention: Artificial Intelligence and the End of the Human Era

“I’m an optimist. As an inventor I have to be.”

After his talk, Kurzweil and I sat knee to knee on metal chairs in a small dressing room a floor above the stage. A documentary film crew waited outside to get time with him after me. Just a decade before, when he had been a semifamous inventor and author, I’d monopolized him for three enjoyable hours with a film crew of my own; now he was a one-man industry whose attention I’d have as long as I could keep the door shut. I, too, was different—at our first meeting I had been gobsmacked by the idea of porting my brain to a computer, as described in Spiritual Machines. My questions were as tough as champagne bubbles. Now I’m more cynical, and wise to the dangers that no longer interest the master himself.

“I discuss quite a bit about peril in The Singularity Is Near,” Kurzweil protested when I asked if he’d overspun the Singularity’s promise and undersold its dangers. “Chapter eight is the deeply intertwined promise and peril in GNR [genetics, nanotechnology, and robotics] and I go into pretty graphic detail on the downsides of those three areas of technology. And the downside of robotics, which really refers to AI, is the most profound because intelligence is the most important phenomenon in the world. Inherently there is no absolute protection against strong AI.”

Kurzweil’s book does underline the dangers of genetic engineering and nanotechnology, but it gives only a couple of anemic pages to strong AI, the old name for AGI. And in that chapter he also argues that relinquishment, or turning our backs on some technologies because they’re too dangerous, as advocated by Bill Joy and others, isn’t just a bad idea, but an immoral one. I agree relinquishment is unworkable. But immoral?

“Relinquishment is immoral because it would deprive us of profound benefits. We’d still have a lot of suffering that we can overcome and therefore have a moral imperative to do that. Secondly, relinquishment would require a totalitarian system to ban the technology. And thirdly and most importantly it wouldn’t work. It would just drive the technologies underground where irresponsible practitioners would then have no limitations. Responsible scientists who are charged with developing these defenses would not have access to the tools to do that. And it would actually be more dangerous.”

Kurzweil is criticizing what’s called the Precautionary Principle, a proposition that came out of the environmental movement, which, like relinquishment, is a straw man in this conversation. But it’s important to spell out the principle, and see why it doesn’t carry weight. The Precautionary Principle states, “If the consequences of an action are unknown but judged by some scientists to have even a small risk of being profoundly negative, it’s better to not carry out the action than risk negative consequences.” The principle is rarely invoked, and almost never strictly; applied strictly, it would halt any purportedly dangerous technology whenever “some scientists” feared it, even if they couldn’t put their finger on the causal chain leading to their feared outcome.

Applied to AGI, the Precautionary Principle and relinquishment are nonstarters. Barring a catastrophic accident on the way to AGI that would scare us straight, both measures are unenforceable. The best corporate and government AGI projects will seek the competitive advantage of secrecy—we have seen it already in stealth companies. Few countries or corporations would surrender this advantage, even if AGI development were outlawed. (In fact, Google Inc. has the money and influence of a modern nation-state, so for an idea of what other countries will do, keep an eye on Google.) The technology required for AGI is ubiquitous and multipurpose, and getting smaller all the time. It’s difficult if not impossible to police its development.

But whether it’s immoral not to develop AGI, as Kurzweil states, is something else. First, AGI’s benefits would be formidable, but only if we live to enjoy them. And that’s a pretty big if when the system is advanced enough to foment an intelligence explosion. To argue, as Kurzweil does, that unproven benefits outweigh unproven risks is problematic. I’d told him I think it is immoral to develop technologies like AGI without simultaneously educating as many people as possible about the risks. I think the catastrophic risks of AGI, now accepted by many accomplished and respected researchers, are better established than his Singularity’s supposed benefits—nano-purified blood; better, faster brains; and immortality, for starters. The only thing certain about the Singularity is that it describes a period in which, by the power of LOAR, Kurzweil’s Law of Accelerating Returns, we’ll have fast, smart computers embedded in every facet of our lives and our bodies. Then alien machine intelligence may give our native intelligence a run for its money. Whether we’ll like it is another matter. If you read Kurzweil closely you see that the benefits accrue chiefly from augmentation, and augmentation is necessary for keeping up with a blisteringly fast pace of change. As I’ve argued, I think that pace will drive technological whiplash, and even rejection.

And that’s not even my main fear, because I don’t think we’ll ever get there. I think we’ll be stopped on the way by tools too powerful to control. I said this to Kurzweil and he countered with some boilerplate—the same optimistic, anthropomorphic argument he gave me ten years ago.

“To have an entity that’s very intelligent and for some reason is bent on our destruction would be a negative scenario. But you’d have to ask why would there be such a thing? First of all I would maintain it’s not us versus the machines because the machines are not in a separate civilization. It’s part of our civilization. They are tools that we use and they extend ourselves and even if we become the tools it still is evolving from our civilization. It’s not some alien invasion of some machines from Mars. We’re not going to wonder what are their values.”

As we’ve discussed, assuming AGI will be just like us is imputing human values into an intelligent machine that got its intelligence, and its values, in a very different manner than we did. Despite their builders’ best intentions, in most if not all AGI a great deal of how the system works will be too opaque and too complex for us to fully understand or predict. Alien, unknowable, and finally this—some AGIs will be created with the intent to kill humans, because, let’s not forget, in the United States, our national defense institutions are among the most active investors. We should assume that this is true in other countries as well.

I’m sure that Kurzweil has considered that AGI doesn’t have to be designed with the goal of hurting humankind in order for it to destroy humankind, and that its simple disregard will do. As Steve Omohundro warns, without careful programming, advanced AI will possess motivations and goals that we may not share. As Eliezer Yudkowsky says, it may have other uses for our atoms. And as we’ve seen, Friendly AI, which would ensure the good behavior of the first AGI and all its progeny, is a concept that’s a long way from being ready.

Kurzweil doesn’t give much time to the concept of Friendly AI. “We can’t just say, ‘we’ll put in this little software code subroutine in our AIs, and that’ll keep them safe,’” he said. “I mean it really comes down to what the goals and intentions of that artificial intelligence are. We face daunting challenges.”

It boggles the mind to consider Unfriendly AI—AGI designed with the goal of destroying enemies, a reality we’ll soon have to face. “Why would there be such a thing?” Kurzweil asks. Because dozens of organizations in the United States will design and build it, and so will our enemies abroad. If AGI existed today, I have no doubt it would soon be implemented in battlefield robots. DARPA might insist there’s nothing to worry about—DARPA-funded AI will only kill our enemies. Its makers will install safeguards, fail-safes, dead man’s switches, and secret handshakes. They will control superintelligence.

In December 2011, an Iranian with a laptop running a simple file-sharing program brought down a Sentinel drone. In July 2008, a cyberattack against the Pentagon gave invaders unfettered access to 24,000 classified documents. Former Deputy Defense Secretary William J. Lynn III told The Washington Post that hundreds of cyberattacks against the DoD and contractors have resulted in the theft of “our most sensitive systems, including aircraft avionics, surveillance technologies, satellite communications systems, and network security protocols.” Superintelligence won’t be boxed in by anyone who can’t do something as comparatively easy as keeping human hackers out.

However, we can draw some important insights from the history of arms control. Since the creation of nuclear weapons, only the United States has used them against an enemy. Nuclear powers have so far avoided the all-out exchange that Mutually Assured Destruction threatens. No nuclear power has suffered an accidental detonation that we know about. The record of nuclear stewardship is a good one (although the threat’s not over). But here’s my point. Too few people know that we need to have an ongoing international conversation about AGI comparable to the one we have about nuclear weapons. Too many people think the frontiers of AI are delineated by harmless search engines, smartphones, and now Watson. But AGI is much closer to nuclear weapons than to video games.

AI is a “dual use” technology, a term used to describe technologies with both peaceful and military applications. For instance, nuclear fission can power cities or destroy cities (or in the cases of Chernobyl and Fukushima Daiichi, do both sequentially). Rockets developed during the space race increased the power and accuracy of intercontinental ballistic missiles. Nanotechnology, bioengineering, and genetic engineering all hold terrific promise in life-enhancing civilian applications, but all are primed for catastrophic accidents and exploitation in military and terrorist use.

When Kurzweil says he’s an optimist, he doesn’t mean AGI will prove harmless. He means he’s resigned to the balancing act humans have always performed with potentially dangerous technologies. And sometimes humans take a fall.

“There’s a lot of talk about existential risk,” Kurzweil said. “I worry that painful episodes are even more likely. You know, sixty million people were killed in World War II. That was certainly exacerbated by the powerful destructive tools that we had then. I’m fairly optimistic that we will make it through. I’m less optimistic that we can avoid painful episodes.”

“There is an irreducible promise versus peril that goes back to fire. Fire cooked our food but was also used to burn down our villages. The wheel is used for good and bad and everything in between. Technology is power, and this very same technology can be used for different purposes. Human beings do everything under the sun from making love to fighting wars and we’re going to enhance all of these activities with our technology we already have and it’s going to continue.”

Volatility is inescapable, and accidents are likely—it’s hard to argue with that. Yet the analogy doesn’t fit—advanced AI isn’t at all like fire, or any other technology. It will be capable of thinking, planning, and gaming its makers. No other tool does anything like that. Kurzweil believes that a way to limit the dangerous aspects of AI, especially ASI, is to pair it with humans through intelligence augmentation—IA. From his uncomfortable metal chair the optimist said, “As I have pointed out, strong AI is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such it will reflect our values because it will be us.”

And so, the argument goes, it will be as “safe” as we are. But, as I told Kurzweil, Homo sapiens are not known to be particularly harmless when in contact with one another, other animals, or the environment. Who is convinced that humans outfitted with brain augmentation will turn out to be friendlier and more benevolent than machine superintelligences? An augmented human, called a transhuman by those who look forward to becoming one, may sidestep Omohundro’s Basic AI Drives problem. That is, it could be self-aware and self-improving, but it would have built into it a refined set of humancentric ethics that would override the basic drives Omohundro derives from the rational economic agent model. However, Flowers for Algernon notwithstanding, we have no idea what happens to a human’s ethics after their intelligence is boosted into the stratosphere. There are plenty of examples of people of average intelligence who wage war against their own families, high schools, businesses, and neighborhoods. And geniuses are capable of mayhem, too—for the most part the world’s military generals have not been idiots. Superintelligence could very well be a violence multiplier. It could turn grudges into killings, disagreements into disasters, the way the presence of a gun can turn a fistfight into a murder. We just don’t know. However, an ASI created through intelligence augmentation would carry the biology-based aggression that machines lack. Our species has a well-established track record for self-protection, consolidating resources, outright killing, and the other drives we can only hypothesize about in self-aware machines.

And who’ll be first to “benefit” from substantial augmentation? The richest? We might like to believe that evil isn’t disproportionately present in wealthy people, but a recent study from the University of California at Berkeley suggests otherwise. Experiments showed that the wealthiest upper-class citizens were more likely than others to “exhibit unethical decision-making tendencies, take valued goods from others, lie in a negotiation, cheat to increase their chances of winning a prize, and endorse unethical behavior at work.” There’s no shortage of well-heeled CEOs and politicians whose rise to power seems to have been accompanied by a weakening of their moral compasses, if they ever had them. Will politicians or business leaders be the first whose brains are substantially augmented?

Or will the first recipients be soldiers? DARPA has been picking up the lion’s share of the tab, so it makes sense that brain augmentation will gain a first foothold on the battlefield or at the Pentagon. And DARPA will want its money back if superintelligence makes soldiers superfriendly.
