Our Final Invention: Artificial Intelligence and the End of the Human Era

Augmentation may occur in a future that’s better equipped to deal with it than the present, one with safeguards of a kind we can’t imagine from here. Having multiple ASIs would likely be safer than having just one. Having some way to monitor and track AIs would be better yet, and paradoxically the best “probation” agents for that would probably be other AIs. We’ll explore defenses against ASI in chapter 14. The point is, intelligence augmentation is no moral fail-safe. Superintelligence could be more lethal than any of the most highly controlled weapons and technologies that exist today.

We’ll have to develop, side by side with augmentation, a science for choosing candidates for intelligence enhancement. The Singularitarians’ conceit that anyone who can afford it will enjoy superintelligence through brain augmentation is a virtual guarantee that everyone else will have to live at the mercy of the first malevolent superintelligence achieved this way. That’s because, as we’ve discussed, there’s a decisive first-mover advantage in AGI development. Whoever initially achieves AGI will probably then create the conditions necessary for an intelligence explosion. They’ll do that because they fear their chief competitors, corporate or military, will do the same, and they don’t know how close to the finish line their competitors are. A giant gulf separates AI and AGI makers from the research on risk they should be reading. A minority of the AGI makers I’ve spoken with have read any work by MIRI, the Future of Humanity Institute, the Institute for Ethics and Emerging Technologies, or Steve Omohundro. Many don’t know there is a growing community of people concerned with the development of smarter-than-human intelligence, who’ve done important research anticipating its catastrophic dangers. Unless this awareness changes, I’ve no doubt that their sprint from AGI to ASI will not be accompanied by safeguards sufficient to prevent catastrophe.

Here’s a glaring example. In August 2009 in California, the Association for the Advancement of Artificial Intelligence (AAAI) brought together a group to address the growing public fears of robots running amok, loss of privacy, and religious-sounding technological movements.

“Something new has taken place in the past five to eight years,” said organizer Eric Horvitz, a prominent Microsoft researcher. “Technologists are providing almost religious visions, and their ideas are resonating in some ways with the same idea of the Rapture.… My sense was that sooner or later we would have to make some sort of statement or assessment, given the rising voice of the technorati and people very concerned about the rise of intelligent machines.”

But despite its promise, the meeting was a missed opportunity. It wasn’t open to the public or press, and machine ethicists and other thinkers working in risk assessment were all left out. Only computer scientists were invited to the discussions. That’s a little like asking race car drivers to set urban speed limits. One subgroup labored over Isaac Asimov’s Three Laws of Robotics, a sign that their ethics discussions weren’t burdened by the volumes of work that have moved beyond those science-fiction props. Horvitz’s lean conference report expresses skepticism about an intelligence explosion, the Singularity, and loss of control of intelligent systems. Nevertheless, the conference urged further research by ethicists and psychologists, and highlighted the danger of increasingly complex and inscrutable computer systems, including “costly, unforeseen behaviors of autonomous or semi-autonomous decision-making systems.” And Carnegie Mellon University’s Tom Mitchell, creator of the DARPA-funded commonsense (and potential AGI) architecture called NELL, claimed the conference changed his mind. “I went in very optimistic about the future of AI and thinking that Bill Joy and Ray Kurzweil were far off in their predictions. The meeting made me want to be more outspoken about these issues.”

In The Singularity Is Near, Kurzweil pitches a few solutions to the problem of runaway AI. They’re surprisingly weak, particularly coming from the spokesman who enjoys a virtual monopoly on the superintelligence pulpit. But in another way, they’re not surprising at all. As I’ve said, there’s an irreconcilable conflict between people who fervently desire to live forever, and anything that promises to slow, challenge, or in any way encumber the development of technologies that promote their dream. In his books and lectures Kurzweil has aimed a very small fraction of his acumen at the dangers of AI and proposed few solutions, yet he protests that he’s dealt with them at length. In New York City, in a cramped dressing room with a film crew anxiously throat-clearing outside, I asked myself, how much should we expect from one man? Is it up to Kurzweil to master the Singularity’s promise and the peril and spoon-feed both to us? Does he personally have to explore beyond idioms like “technology’s irreducibly two-faced nature,” and master the philosophy of survival as conceived by the likes of Yudkowsky, Omohundro, and Bostrom?

No, I don’t think so. It’s a problem we all have to confront, with the help of experts, together.

 

Chapter Eleven

A Hard Takeoff

Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.

—Samuel Butler, nineteenth-century English poet and author

More than any other time in history mankind faces a crossroads. One path leads to despair and utter hopelessness, the other to total extinction. Let us pray we have the wisdom to choose correctly.

—Woody Allen

I. J. Good didn’t invent the intelligence explosion any more than Sir Isaac Newton invented gravity. All he did was observe that an event he considered both inevitable and a net positive for mankind was certain to yield the kind of “ultraintelligence” we humans need to solve problems that are too difficult for us. Then, after he’d lived three more decades, Good changed his mind. We’ll make superintelligent machines in our image, he said, and they will destroy us. Why? For the same reason we’d never agree to a ban on AI research, and the same reason we’d likely give the Busy Child its freedom. For the same reason the thoroughly rational AI maker Steve Omohundro, and every other AI expert I’ve met, believe that stopping development of AGI until we know more about its dangers just won’t fly.

We won’t stop developing AGI because, more than dangerous AI, we fear that other nations will persist with AGI research no matter what the international community says or does about it. We will believe it is wiser to beat them to the punch. We are in the middle of an intelligence race, and to the dismay of many, it’s shaping up to be a more threatening global competition than the one we seem to have just escaped, the nuclear arms race. We’ll follow policy makers and technology’s cheerleaders to our doom, in Good’s phrase, “like lemmings.”

Ray Kurzweil’s positive Singularity doesn’t require an intelligence explosion—the Law of Accelerating Returns guarantees the continued exponential growth of information technologies, including world-changing ones like AGI, and later ASI. Recall that AGI is required for a Goodian intelligence explosion. The explosion yields smarter-than-human intelligence or ASI. Kurzweil claims that AGI will be conquered, slowly at first, then all at once, by the powers of LOAR.

Kurzweil isn’t concerned about roadblocks to AGI since his preferred route is to reverse engineer the brain. He believes there’s nothing about brains, and even consciousness, that cannot be computed. In fact, every expert I’ve spoken with believes that intelligence is computable. Few believe an intelligence explosion in Good’s sense is necessary to achieve ASI after AGI is reached. Slow, steady progress should do it, though, as Kurzweil insists, that progress probably won’t be slow or steady but fast and accelerating.

However, an intelligence explosion may be unavoidable once almost any AGI system is achieved. When any system becomes self-aware and self-improving, its basic drives, as described by Omohundro, virtually guarantee that it will seek to improve itself again and again.

So is an intelligence explosion inevitable? Or could something stop it?

AGI defeaters cluster around two ideas: economics and software complexity. The first, economics, considers that funds won’t be available to get from narrow AI to the far more complex and powerful cognitive architectures of AGI. Few AGI efforts are well-funded. This prompts a subset of researchers to feel that their field is stuck in the endless stall of a so-called AI winter. They’ll escape if the government or a corporation like IBM or Google considers AGI a priority of the first order, and undertakes a Manhattan Project–sized effort to achieve it. During World War II, fast-tracking atomic weapons development cost the U.S. government about $2 billion in today’s dollars, and employed around 130,000 people. The Manhattan Project frequently comes up among researchers who want to achieve AGI soon. But who would want to take on that task, and why?

The software complexity defeater claims the problem of AGI is simply too difficult for humans, no matter how long we chip away at it. As philosopher Daniel Dennett suggests, we may not possess minds that can understand our own minds. Mankind’s intelligence probably isn’t the greatest possible. But it might require intelligence greater than our own to fathom our intelligence in full.

*   *   *

To explore the plausibility of intelligence explosion defeaters, I went to a man I kept running into at AI conferences, and whose blogs, papers, and articles I frequently read on the Web. He’s an AI maker who’s published so many essays and interviews, plus nine hardcover books, and countless academic papers, it wouldn’t have surprised me to discover a robot in his home in the suburbs of Washington, D.C., slaving around the clock to produce the written output of Dr. Benjamin Goertzel so Ben Goertzel could go to conferences. The twice-married father of three has served on faculty in university departments of computer science, mathematics, and psychology in the United States, Australia, New Zealand, and China. He’s the organizer of the only annual international artificial general intelligence conference, and more than anyone else he popularized the term AGI. He’s the CEO of two technology companies and one of them, Novamente, is on some AI experts’ short list for being the first to crack AGI.

Generally speaking, Goertzel’s cognitive architecture, called OpenCog, represents an engineered, computer science approach. Computer science–based researchers want to engineer AGI with an architecture that works similarly to the way our brains work, as described by the cognitive sciences. Those include linguistics, psychology, anthropology, education, philosophy, and more. Computer science researchers believe that creating intelligence exactly the way brains do—reverse engineering the organ itself as recommended by Kurzweil and others—is unnecessarily time-consuming. Plus, the brain’s design is not optimal—programming can do better. After all, they reason, humans didn’t reverse engineer a bird to learn how to fly. From observing birds, and experimenting, they derived principles of flight. The cognitive sciences are the brain’s “principles of flight.”

OpenCog’s organizing theme is that intelligence is based on high-level pattern recognition. Usually, “patterns” in AI are chunks of data (files, pictures, text, objects) that have been classified—organized by category—or will be classified by a system that’s been trained on data. Your e-mail’s “spam” filter is an expert pattern recognizer—it recognizes one or more traits of unwanted e-mail (for example, the words “male enhancement” in the subject line) and segregates it.
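To make that concrete, here is a minimal sketch, in Python, of the kind of trait-based classification a spam filter performs. The trait list, threshold, and function names are illustrative assumptions of mine, not the internals of any real filter.

```python
# Minimal sketch of trait-based pattern recognition, in the spirit of a spam
# filter. The traits, threshold, and names are illustrative assumptions.

SPAM_TRAITS = ["male enhancement", "act now", "wire transfer"]

def looks_like_spam(subject, body, threshold=1):
    """Classify a message as spam if it matches enough known traits."""
    text = (subject + " " + body).lower()
    hits = sum(1 for trait in SPAM_TRAITS if trait in text)
    return hits >= threshold

# Messages that match a known pattern get segregated from the inbox.
inbox, spam = [], []
for subject, body in [("Male Enhancement deals", "Act now!"),
                      ("Lunch?", "Noon works for me")]:
    (spam if looks_like_spam(subject, body) else inbox).append(subject)
```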

OpenCog’s notion of pattern recognition is more refined. The pattern it finds in each thing or idea is a small program that contains a kind of description of the thing. This is the machine version of a concept. For example, when you see a dog you instantly grasp a lot about it—you hold a concept of a dog in your memory. Its nose is wet, it likes bacon, it sheds fur, it chases cats. A lot is packed inside your concept of a dog.

When OpenCog’s sensors perceive a dog, its dog program will instantly play, focusing OpenCog’s attention on the concept of dog. OpenCog will add more to its concept of dog based on the details of that or any particular dog.
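The idea of a concept as a small, runnable description can be sketched like this. The representation below is my own illustration, not OpenCog’s actual data structures.

```python
# Illustrative sketch of a "concept as a small program" (not OpenCog's actual
# representation). The concept bundles what is already believed about dogs
# and grows when a particular dog adds new details.

class Concept:
    def __init__(self, name, attributes):
        self.name = name
        self.attributes = set(attributes)      # e.g., "wet nose", "sheds fur"

    def activate(self, observed_details):
        """'Play' the concept: recall what is already known about this kind
        of thing, then fold any new details into the concept."""
        known = self.attributes & set(observed_details)
        novel = set(observed_details) - self.attributes
        self.attributes |= novel               # the concept grows with experience
        return known, novel

dog = Concept("dog", ["wet nose", "likes bacon", "sheds fur", "chases cats"])
known, novel = dog.activate(["wet nose", "chases cats", "afraid of thunder"])
```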

Individual modules in OpenCog will execute tasks such as perception, focusing attention, and memory. They do it through a familiar but customized software toolkit of genetic programming and neural networks.
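As a rough illustration of the evolutionary half of that toolkit, here is a toy genetic algorithm, a much-simplified cousin of genetic programming. The representation, fitness function, and rates are assumptions made for the example.

```python
# Toy genetic algorithm: candidates are scored, the fittest are kept, and
# random mutation supplies variation. A simplified stand-in for genetic
# programming; every detail here is an illustrative assumption.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]                     # pattern to be learned

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):         # perfect match found
        break
    parents = population[:5]                          # selection: keep the fittest
    population = [mutate(random.choice(parents)) for _ in range(20)]
```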

Then the learning starts. Goertzel plans to “grow” the AI in a virtual computer-generated world, such as Second Life, a process of reinforcement learning that could take years. Like others building cognitive architectures, Goertzel believes intelligence must be “embodied,” in a “vaguely humanlike way,” even if its body exists in a virtual world. Then this infant intelligent agent will be able to grow a collection of facts about the world it inhabits. In its learning phase, which Goertzel models on psychologist Jean Piaget’s theories of child development, the infant OpenCog might supplement what it knows by accessing one of several commercial commonsense databases.
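A bare-bones sketch of that kind of reinforcement learning might look like the following, with a one-dimensional “world” standing in for something as rich as Second Life. The environment, rewards, and learning rates are assumptions for the example, not Goertzel’s training setup.

```python
# Bare-bones reinforcement learning (tabular Q-learning): an agent in a tiny
# one-dimensional "virtual world" learns, by trial, error, and reward, to walk
# toward a goal. Far simpler than an embodied agent, but the loop is the same:
# act, observe the outcome, adjust.

import random

GOAL, states, actions = 4, range(5), [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in states for a in actions}     # learned action values

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise exploit what has been learned.
        if random.random() < 0.2:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), GOAL)
        reward = 1.0 if nxt == GOAL else -0.01
        best_next = max(Q[(nxt, a)] for a in actions)
        Q[(state, action)] += 0.5 * (reward + 0.9 * best_next - Q[(state, action)])
        state = nxt
```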

One such giant warehouse of knowledge is called Cyc, short for encyclopedia. Created by Cycorp, Inc., it contains about a million terms, and about five million rules and relationship facts about those terms. It’s taken more than a thousand person-years to hand code this body of knowledge in first-order logic, a formal system used in mathematics and computer science for representing assertions and relationships. Cyc is nothing less than a huge well of deep human knowledge—it “understands” a lot of the English language, as much as 40 percent. Cyc “knows,” for example, what a tree is, and it knows that a tree has roots. It also knows human families have roots and family trees. It knows that newspaper subscriptions stop when people die, and that cups can hold liquid that can be poured out quickly or slowly.
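The flavor of those hand-coded assertions can be suggested with a small sketch. This is not Cyc’s actual CycL syntax, just first-order-logic-style facts and one rule rendered in Python for illustration.

```python
# Illustrative sketch of commonsense facts and a rule, in the spirit of (but
# not the syntax of) Cyc's hand-coded first-order logic.

facts = {
    ("isa", "oak", "tree"),
    ("has_part", "tree", "roots"),
    ("has", "family", "family_tree"),
    ("subscribes_to", "mr_jones", "daily_gazette"),
    ("died", "mr_jones", None),
}

# Rule: when a person has died, any newspaper subscription they hold ends.
deceased = {f[1] for f in facts if f[0] == "died"}
for fact in list(facts):
    predicate, subject, obj = fact
    if predicate == "subscribes_to" and subject in deceased:
        facts.discard(fact)
        facts.add(("subscription_ended", subject, obj))
```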
