
iRobot, the company he founded, manufactures weaponized robots:
Palmisano, John, “iRobot Demonstrates New Weaponized Robot,” IEEE Spectrum, May 30, 2010, http://spectrum.ieee.org/automaton/robotics/military-robots/irobot-demonstrates-their-latest-war-robot (accessed October 2, 2011).

an intelligence explosion requires AGI:
Loosemore, Richard, and Ben Goertzel, “Why an Intelligence Explosion is Probable,” H+ Magazine, March 7, 2011, http://hplusmagazine.com/2011/03/07/why-an-intelligence-explosion-is-probable/ (accessed October 2, 2011).
self-improvement in pursuit of goals is rational behavior:
Omohundro, Stephen, “The Nature of Self-Improving Artificial Intelligence,” January 21, 2008, http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf (accessed September 4, 2010).

Goertzel’s plan is to create an infantlike AI “agent”:
Dvorsky, George, “How will we build an artificial human brain?” io9 We Come from the Future, May 2, 2012, http://io9.com/5906945/how-will-we-build-an-artificial-human-brain (accessed June 2, 2012).
Prevent an intelligence explosion from occurring in this virtual world:
Hutter, Marcus, “Can Intelligence Explode?” February 2012, singularitysummit.com.au/2012/08/can-intelligence-explode (accessed July 3, 2012).

software that derives scientific laws from raw data:
Keim, Brandon, “Download Your Own Robot Scientist,” Wired, December 3, 2009, http://www.wired.com/wiredscience/2009/12/download-robot-scientist/ (accessed June 3, 2011).
many generations later output physical laws:
Chang, Kenneth, “Hal, Call Your Office: Computers That Act Like Physicists,” New York Times, April 2, 2009, http://www.nytimes.com/2009/04/07/science/07robot.html?em (accessed July 5, 2012).
it evolved rules about its own operation:
Johnson, George, “Eurisko, the Computer with a Mind of Its Own,” the Alicia Patterson Foundation, last modified April 6, 2011, http://aliciapatterson.org/stories/eurisko-computer-mind-its-own (accessed July 5, 2012).
Eurisko’s greatest success:
Ibid.

Eurisko created a rule:
Lenat, Douglas B., “EURISKO: A Program That Learns New Heuristics and Domain Concepts (The Nature of Heuristics III: Program Design and Results),” Artificial Intelligence 21 (1983), 61–98.

he urges programmers not to bring it back:
Yudkowsky, Eliezer, “Let’s reimplement EURISKO!” Less Wrong (blog), June 11, 2009, http://lesswrong.com/lw/10g/lets_reimplement_eurisko/ (accessed June 3, 2010).

Every eight processors of its 30,000:
Mozyblog, “How Much is a Petabyte?” last modified July 2, 2009, http://mozy.com/blog/misc/how-much-is-a-petabyte/ (accessed April 3, 2010).
as a company spokesman put it:
Brodkin, John, “$1,279-per-hour, 30,000-core cluster built on Amazon EC2 cloud,” Ars Technica, last modified September 21, 2011, http://arstechnica.com/business/news/2011/09/30000-core-cluster-built-on-amazon-ec2-cloud.ars (accessed April 3, 2012).

She selects jobs to offer a sailor:
Franklin, Stan, and F. G. Patterson, “The Lida Architecture: Adding New Modes of Learning to an Intelligent, Autonomous Software Agent,” Institute for Intelligent Systems, FedEx Institute of Technology, The University of Memphis, June 2006, http://www.theassc.org/files/assc/zo-1010-lida-060403.pdf (accessed February 23, 2010).
We are forming Silicon Valley’s next great company:
Stealth-Company, “Get In Early,” last modified 2008, http://www.stealth-company.com/site/home (accessed December 2, 2011).

DARPA funded more AI research than private corporations:
Funding a Revolution: Government Support for Computing Research (Washington, D.C.: National Academy Press, 1999), 200–205.
How is DARPA spending its money?:
Department of Defense Fiscal Year (FY) 2012 Budget Estimates, Defense Advanced Research Projects Agency (Arlington, Virginia: DOD, 2011).

The Cognitive Computing Systems program:
Ibid.

to create cognitive software systems:
SRI International, “Cognitive Assistant that Learns and Organizes,” last modified 2012, http://www.ai.sri.com/project/CALO (accessed March 2, 2010).
Within its own cognitive architecture:
Ibid.

In 2010, Apple Computer bought Siri:
Schonfeld, Erick, “Silicon Valley Buzz: Apple Paid More Than $200 Million for Siri to Get into Mobile Search,” TechCrunch, last modified April 28, 2010, http://techcrunch.com/2010/04/28/apple-siri-200-million/ (accessed March 10, 2011).
the Army will take iPhones into battle:
Raice, Shayndi, “Smartphones Going into Battle, Army Says,” Digits: Technology News and Insights (blog), December 14, 2010, http://blogs.wsj.com/digits/2010/12/14/smartphones-going-into-battle-army-says/ (accessed March 10, 2010).

stunning implications for the world economy:
Loosemore, Richard, and Ben Goertzel, “Why an Intelligence Explosion is Probable,” H+ Magazine, March 7, 2011, http://hplusmagazine.com/2011/03/07/why-an-intelligence-explosion-is-probable/ (accessed April 2, 2011).
any limitations to the economic growth rate:
Ibid.

growth rate would be defined by the various AGI projects:
Ibid.

12: THE LAST COMPLICATION

How can we be so confident:
Hibbard, Bill, “AI is a Threat Despite Calming Voices,” last modified August 20, 2010, http://sites.google.com/site/whibbard/g/hibbard_oped_aug2010 (accessed June 10, 2011).
we will eventually uncover the principles:
Anissimov, Michael, Accelerating Future, “More Singularity Curmudgeonry from John Horgan,” last modified June 23, 2010 (accessed June 11, 2011).

Normalcy Bias:
Valentine, Pamela, and Thomas Smith, “Finding Something to Do: The Disaster Continuity Care Model,” Brief Treatment and Crisis Intervention 2 (Summer 2002), 183–196, http://btci.edina.clockss.org/cgi/reprint/2/2/183.pdf (accessed September 4, 2012).

The same restriction would apply to human-computer interfaces:
As we look into the software complexity problem, let’s also consider how long humans have been trying to scratch this particular itch. In 1956, John McCarthy, called the “father” of artificial intelligence (he coined the term), claimed the whole problem of AGI could be solved in six months. In 1970, AI pioneer Marvin Minsky said, “In from three to eight years we will have a machine with the general intelligence of an average human being.” Considering the state of the science, and with the benefit of hindsight, both men were loaded with hubris, in the Classical sense. Hubris comes from a Greek word meaning arrogance, and often, arrogance toward the gods. The sin of hubris was attributed to men who tried to act outside of human limitations. Think Icarus attempting flight, Sisyphus outwitting Zeus (for a while anyway), and Prometheus giving fire to man. Pygmalion, according to mythology, was a sculptor who fell in love with one of his statues, Galatea, Greek for “sleeping love.” Yet he suffered no cosmic comeuppance. Instead, Aphrodite, Goddess of Love, brought Galatea to life. Hephaestus, Greek God of technology, among other things, routinely built metal automatons to help with his metallurgy. He created Pandora and her box, and Talos, a giant made of bronze that protected Crete from pirates.

Paracelsus, the great medieval alchemist, best known for linking medicine and chemistry, allegedly fine-tuned a formula for creating humanlike creatures, and human-animal hybrids, called homunculi. Just fill a bag with human bones, hair, and sperm, then bury it in a hole along with some horse manure. Wait forty days. A humanlike infant will struggle to life, and thrive if fed on blood. It will always be tiny but it will do your bidding until it turns on you and runs away. If you’d like to mix the human with another animal, say a horse, substitute horsehair for human hair. However, while I can think of ten uses for a tiny human (cleaning heating ducts, getting dog hair out of the Roomba, and more), I can think of none for a tiny centaur.

Before MIT’s Robotics Lab and Mary Shelley’s Frankenstein existed, there was the Jewish tradition of the golem. Like Adam, a golem is a male creature made of earth. Unlike Adam, it was not brought to life by the breath of God, but by incantations of words and numbers uttered by rabbis (Kabbalists who believe in an orderly universe and the divinity of numbers). The name of God, written on paper and put in its mouth, kept the mute, ever-growing creature “alive.” In Jewish folklore, magic-wielding rabbis created golems to serve as valets and domestic servants. The most famous golem, named Yosele, or Joseph, was created in the sixteenth century by Prague’s chief rabbi, Yehuda Loew. In an era when Jews were accused of using the blood of Christian infants to make matzoth, Yosele kept busy rounding up gentile “blood” libelers, capturing crooks in Prague’s ghetto, and generally helping Rabbi Loew fight crime. Eventually, according to tradition, Yosele went berserk. To save his fellow Jews, the rabbi battled the golem, and removed the life-giving piece of paper from his mouth. Yosele turned back to clay. In another version, Rabbi Loew was crushed to death by the falling giant, a fitting reward for the hubristic act of creation. In yet another version, Rabbi Loew’s wife asked Yosele to fetch water. He kept at it until his creator’s house was flooded. In computer science, not knowing whether or not your program will act this way is called the “halting problem.” That is, good programs will run until their instructions tell them to stop, and in general it is impossible to know for sure if any given program will ever stop until you run it. Taken here, Rabbi Loew’s wife could have specified how much water to fetch, say, one hundred liters, and Yosele probably would’ve stopped after that. In this story she neglected to.

The halting problem is a real issue to programmers, who may not discover until their programs are running that an infinite loop lies hidden in the code. And an interesting thing about the halting problem is that it’s impossible to create a program that determines if the program you’ve written has the halting problem. That diagnostic debugger sounds plausible, but none other than Alan Turing discovered it is not (and he discovered it before there were computers or programming). He said the halting problem is unsolvable because if the debugger encounters a halting problem in the target program, it will succumb to the infinite loop while analyzing it, and never determine if the halting problem was there. You, the programmer, would be waiting for it to come up with the answer for the same amount of time you’d wait for the original program to halt. That is, a very long time, perhaps forever. Marvin Minsky, one of the fathers of artificial intelligence, pointed out that “any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the machine.” Translated, that means a computer of average memory, while running a program with a halting problem, would take a very long time to fall into a pattern of repetition, which could then be detected by a diagnostic program. How long? Longer than the universe will exist, for some programs. So, for practical purposes, the halting problem means it is impossible to know whether any given program will finish.
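
Turing’s argument can be compressed into a few lines of code. What follows is a minimal sketch in Python, not anything from Turing or from this book: it assumes a hypothetical oracle called halts() that claims to predict whether a program finishes, and the names halts and troublemaker are illustrative inventions. The sketch shows why no such oracle can exist.

def halts(func, arg):
    # Hypothetical oracle: would return True if func(arg) eventually stops,
    # False if it runs forever. Turing showed no correct version can be written.
    raise NotImplementedError("no such decider exists")

def troublemaker(func):
    # Do the opposite of whatever the oracle predicts about func run on itself.
    if halts(func, func):
        while True:   # oracle says "it halts," so loop forever
            pass
    return "halted"   # oracle says "it runs forever," so stop immediately

# Now ask about troublemaker(troublemaker). If halts() answers True,
# troublemaker loops forever; if it answers False, troublemaker halts.
# Either answer is wrong, so a correct halts() cannot exist.

The same contradiction holds no matter how cleverly the oracle is written, which is why the diagnostic debugger described above can never be built in full generality.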

Once Rabbi Loew noticed Yosele’s inability to stop, he could have fixed it with a patch (a change to its programming), in this case by taking from the giant’s mouth the paper on which the name of God was written. In the end, Yosele was shut down and stored, it is said, in the attic of the Old New Synagogue in Prague, to come alive again at the end of days. Rabbi Loew, an actual, historical character, is buried in Prague’s Jewish Cemetery (fittingly, not so far from Franz Kafka’s grave). So alive is the myth of Yosele to families of Eastern European Jewish descent that as late as the last century children were taught the rhyme that will awaken the golem in the end times.

Rabbi Loew’s fingerprints are all over the cultural descendants of the golem, from the obvious Frankenstein, to J.R.R. Tolkien’s The Lord of the Rings, to the Hal 9000 computer of Stanley Kubrick’s classic movie 2001: A Space Odyssey. The computer science experts Kubrick recruited to advise him about the homicidal computer included Marvin Minsky and I. J. Good. Good had only recently written about the intelligence explosion, and anticipated it was no more than two decades away. Probably to his bemusement, advising Kubrick about Hal led to Good’s 1995 induction into the Academy of Motion Picture Arts and Sciences.

According to Pamela McCorduck, author of a history of AI, her interviews revealed that a handful of the pioneers of computer science and artificial intelligence believed they were directly descended from Rabbi Loew, among them John von Neumann and Marvin Minsky.

Entrepreneur and AI maker Peter Voss:
Voss, Peter, MIRI Interview Series, 2011, http://citationmachine.net/index2.php?reqstyleid=10&mode=form&rsid=&reqsrcid=ChicagoInterview&more=yes&nameCnt=1 (accessed June 10, 2010).
Google’s proprietary algorithm called PageRank:
Geordie, “Learn How Google Works: in Gory Detail,” PPC Blog (blog), 2011, http://ppcblog.com/how-google-works/ (accessed October 10, 2011).
