I won’t say that what Turing did:
McKittrick, David, “Jack Good: Cryptographer whose work with Alan Turing at Bletchley Park was crucial to the War effort,”
The Independent
, sec. obituaries, May 14, 2009,
http://www.independent.co.uk/news/obituaries/jack-good-cryptographer-whose-work-with-alan-turing-at-bletchley-park-was-crucial-to-the-war-effort-1684506.html
(accessed September 5, 2011).
In 1957, MIT psychologist:
McCorduck, Pamela,
Machines Who Think, A Personal Inquiry into the History and Prospects of Artificial Intelligence
(San Francisco: W. H. Freeman & Company, 1979), 87–90.
I thought neural networks:
Ibid.
The first talks:
Good based the essay on talks he gave in 1962 and 1963.
Neuroscientist, cognitive scientist:
Goertzel, Ben, and Cassio Pennachin, eds.,
Artificial General Intelligence
(Berlin/New York: Springer, 2007), 18.
B. V. Bowden stated:
Good, “Speculations Concerning the First Ultraintelligent Machine.”
Such machines … could even:
Good, I. J., ed.,
The Scientist Speculates, an Anthology of Partly Baked Ideas
(London: William Heinemann, Ltd., 1962).
Speculations Concerning:
Good, I. J.,
The 1998 “Computer Pioneer Award” of the IEEE Computer Society,
Biography and Acceptance Speech (1998), 8.
8: THE POINT OF NO RETURN
But if the technological Singularity:
Vinge, Vernor, “The Coming Technological Singularity,” 1993,
http://www-rohan.sdsu.edu/faculty/vinge/misc/WER2.html
.
This quotation sounds a lot:
Could Good have read Vinge’s essay, inspired by his own earlier essay, and then had a change of heart? I find that unlikely. By his death Good had published some three million words of scholarship. He’s the most prolific attributer I’ve ever read. And even though many of his footnotes cite his own papers, I believe he would have given credit to Vinge for his change of heart, if Vinge’s essay had prompted it. Good would have delighted in that kind of literary recursion.
It’s a problem we face every time:
Vinge, Vernor,
True Names and Other Dangers
(Wake Forest: Baen Books, 1987), 47.
Through the sixties and seventies:
Vinge, “The Coming Technological Singularity.”
Good has captured the essence of the runaway:
Ibid.
Technology thinkers including:
Kelly, Kevin, “Q&A: Hacker Historian George Dyson Sits Down With Wired’s Kevin Kelly,”
WIRED
, February 17, 2012,
http://www.wired.com/magazine/2012/02/ff_dysonqa/all/
(accessed June 5, 2012).
At his home in California:
Wisegeek, “How Big is the Internet?” last modified 2012,
http://www.wisegeek.com/how-big-is-the-internet.htm
(accessed July 5, 2012).
Per dollar spent:
Kurzweil, Ray,
The Age of Spiritual Machines
(New York: Viking Penguin, 1999), 101–105.
9: THE LAW OF ACCELERATING RETURNS
Computing is undergoing:
King, Rachael, “IBM training computer chip to learn like a human,”
SFGate.com
,
November 7, 2011,
http://articles.sfgate.com/2011-11-07/business/30371975_1_computers-virtual-objects-microsoft
(accessed January 5, 2012).
What, then, is the Singularity:
Kurzweil, Ray,
The Singularity Is Near, When Humans Transcend Biology
(New York: Viking Press, 2005), 7.
Consider J. K. Rowling’s Harry Potter:
Ibid., 4.
Surprisingly, however, he was indirectly responsible:
Joy, Bill, “Why the Future Doesn’t Need Us,”
Wired,
August 4, 1999.
But now, with the prospect of human-level computing power:
Ibid.
Book jacket flap copy for
The Age of Spiritual Machines
(1999).
And as scholar:
Grassie, William, “H-: Millennialism at the Singularity: Reflections on Metaphors, Meanings, and the Limits of Exponential Logic,”
Metanexus
, August 9, 2001,
http://www.metanexus.net/essay/h-millennialism-singularity-reflections-metaphors-meanings-and-limits-exponential-logic
(accessed December 10, 2011).
In 1971, 2,300 transistors:
Kanellos, Michael, “Moore’s Law to roll on for another decade,”
CNET News
, February 10, 2003,
http://news.cnet.com/2100-1001-984051.html
(accessed May 5, 2011).
Here’s a dramatic case in point:
Markoff, John, “The iPad in Your Hand: As Fast as a Supercomputer of Yore,”
New York Times
, May 9, 2011,
http://bits.blogs.nytimes.com/2011/05/09/the-ipad-in-your-hand-as-fast-as-a-supercomputer-of-yore/
(accessed June 25, 2011).
Only Kurzweil would’ve been so bold:
Kurzweil, Ray, “How My Predictions Are Faring,”
KurzweilAI.Net
(blog), October 2010,
http://www.kurzweilai.net/predictions/download.php
(accessed August 5, 2011).
Computing speed doubles every two years:
Yudkowsky, Eliezer, “Staring into the Singularity,”
Eliezer S. Yudkowsky
(blog), November 18, 1996,
http://yudkowsky.net/obsolete/singularity.html
(accessed September 5, 2011).
3-D processor chips developed by Switzerland’s École Polytechnique Fédérale de Lausanne (EPFL):
“News Mediacom,” last modified January 25, 2012,
http://actu.epfl.ch/news/jumpstarting-computers-with-3d-chips/
(accessed June 5, 2012).
Intel’s new Tri-Gate transistors:
Poeter, Damon, and Mark Hachman, “Next Intel Chips Will Have the World’s First ‘3D’ Transistors,”
PCMAG.COM
, May 4, 2011,
http://www.pcmag.com/article2/0,2817,2384897,00.asp
(accessed September 5, 2011).
Recently Google’s cofounder Larry Page:
Feeney, Lauren, “Futurist Ray Kurzweil isn’t worried about climate change,”
PBS.ORG
Need to Know
, February 16, 2011,
http://www.pbs.org/wnet/need-to-know/environment/futurist-ray-kurzweil-isnt-worried-about-climate-change/7389/
(accessed September 5, 2011).
We now have the actual means:
Ibid.
Kurzweil writes that the brain has about 100:
Dartmouth College computational neuroscientist Rick Granger claims each neuron in the brain is connected to many tens of thousands of other neurons. This would make the brain much faster than Kurzweil estimated in
The Age of Spiritual Machines
and
The Singularity Is Near.
If it’s much faster, its computer equivalent in speed is farther away. But, considering LOAR, not a lot farther.
That makes about 100 trillion interneuronal connections:
Ray Kurzweil,
The Age of Spiritual Machines
(New York: Viking Penguin, 1999), 103.
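The arithmetic behind these two brain-speed figures is easy to lay out. Below is a minimal sketch using Kurzweil’s published assumptions from The Age of Spiritual Machines (about 100 billion neurons, roughly 1,000 connections per neuron, and about 200 calculations per second per connection); the 50,000-connections-per-neuron case is only an illustrative stand-in for Granger’s “many tens of thousands,” not a figure either researcher publishes.

```python
# Back-of-the-envelope sketch of the brain-speed estimates discussed above.
# The 1,000-connection case follows Kurzweil's calculation in The Age of
# Spiritual Machines; the 50,000-connection case is an assumed stand-in
# for Granger's "many tens of thousands."

NEURONS = 100e9                  # ~100 billion neurons
CALCS_PER_CONN_PER_SEC = 200     # Kurzweil's per-connection rate

for conns_per_neuron in (1_000, 50_000):
    total_connections = NEURONS * conns_per_neuron
    calcs_per_second = total_connections * CALCS_PER_CONN_PER_SEC
    print(f"{conns_per_neuron:>6,} connections/neuron -> "
          f"{total_connections:.0e} connections, "
          f"{calcs_per_second:.0e} calculations/sec")

# 1,000 connections/neuron gives ~1e14 connections and ~2e16 calc/sec,
# matching Kurzweil's 100 trillion figure; 50,000 gives ~5e15 connections
# and ~1e18 calc/sec, which is why a Granger-sized brain pushes its
# computer equivalent further out.
```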
The title of fastest supercomputer:
Brodkin, Jon, “With 16 petaflops and 1.6M cores, DOE supercomputer is world’s fastest,”
Ars Technica
, June 18, 2012,
http://arstechnica.com/information-technology/2012/06/with-16-petaflops-and-1-6m-cores-doe-supercomputer-is-worlds-fastest/
(accessed September 10, 2012).
But by 2005’s
The Singularity Is Near:
Kurzweil, Ray,
The Singularity Is Near
(New York: Viking Press, 2005), 71.
Perhaps as Kurzweil says:
Kurzweil, Ray, “Response to Mitchell Kapor’s ‘Why I Think I Will Win,’”
KurzweilAI.net
, April 20, 2002,
http://www.kurzweilai.net/response-to-mitchell-kapor-s-why-i-think-i-will-win
(accessed September 5, 2011).
That means writing more complex algorithms:
Shulman, Carl, and Anders Sandberg, Machine Intelligence Research Institute, “Implications of a Software-Limited Singularity,” last modified October 31, 2010,
http://intelligence.org/files/softwarelimited.pdf
(accessed March 3, 2013).
Faster computers contribute:
Ibid.
more useful tools:
Know what else doubles about every two years? The Internet, and all the components that make it faster, more densely connected, and able to assimilate more data. In 2009 Google estimated that the Internet contained about five million terabytes of data, which is 250,000 times more information than all the books in the Library of Congress. By 2011 it contained about 500,000 times more data than the LoC. Harris Interactive, an Internet-based market research and polling firm, announced that growth in the number of Internet users justifies its description as the “fastest growing technology in history.” Four years ago, in 2008, the Internet had just under 1.2 billion users worldwide. In 2010 there were more than two billion.
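These multiples imply a Library of Congress baseline of roughly 20 terabytes of text, a commonly cited estimate. The sketch below treats that baseline as an inference from the note’s own figures, not a number the note states, and shows how the 250,000× and 500,000× comparisons translate into raw storage.

```python
# Rough arithmetic behind the Internet-size comparisons in this note.
# The ~20 TB Library of Congress baseline is inferred from the 2009
# figures (5 million TB / 250,000); it is an assumption, not a cited value.

internet_2009_tb = 5_000_000                # Google's 2009 estimate, in terabytes
loc_tb = internet_2009_tb / 250_000         # implied LoC baseline: ~20 TB

internet_2011_tb = 500_000 * loc_tb         # the note's 2011 multiple
print(f"Implied Library of Congress baseline: {loc_tb:.0f} TB")
print(f"2011 estimate: {internet_2011_tb:,.0f} TB "
      f"(~{internet_2011_tb / 1e6:.0f} exabytes)")
```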
Kurzweil writes that:
Kurzweil,
The Singularity Is Near
, 2005.
That means plugging:
National Institute on Deafness and Other Communication Disorders, “More About Cochlear Implants,” last modified June 7, 2010,
http://www.nidcd.nih.gov/health/hearing/pages/coch_moreon.aspx
(accessed September 15, 2011).
10: THE SINGULARITARIAN
In contrast with our intellect:
McAuliffe, Wendy, “Hawking warns of AI world takeover,”
ZDNet,
September 3, 2001,
http://www.zdnet.co.uk/news/application-development/2001/09/03/hawking-warns-of-ai-world-takeover-2094424/
(accessed September 5, 2011).
Within thirty years, we will have the technological means:
Vinge, Vernor, “The Coming Technological Singularity,” 1993,
http://www-rohan.sdsu.edu/faculty/vinge/misc/WER2.html
(accessed September 5, 2011).
If the consequences of an action:
Kurzweil, Ray,
The Singularity Is Near
(New York: Viking Press, 2005), 403.
As Steve Omohundro warns:
Omohundro, Stephen, “The Basic AI Drives,” November 11, 2007,
http://selfawaresystems.com
.
it may have other uses for our atoms:
Yudkowsky, Eliezer, “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” August 31, 2006,
http://intelligence.org/files/AIPosNegFactor.pdf
(accessed February 28, 2013).
We can’t just say, “we’ll put in this little software code”:
Kurzweil, Ray, “Ray Kurzweil: The H+ Interview,”
H+ Magazine
, December 30, 2009,
http://hplusmagazine.com/2009/12/30/ray-kurzweil-h-interview/
(accessed March 1, 2011).
our most sensitive systems, including aircraft avionics:
Ukman, Jason, and Ellen Nakashima, “24,000 Pentagon files stolen in major cyber breach, official says,”
Washington Post
, sec. national, July 14, 2011,
http://www.washingtonpost.com/blogs/checkpoint-washington/post/24000-pentagon-files-stolen-in-major-cyber-breach-official-says/2011/07/14/gIQAsaaVEI_blog.html
(accessed September 28, 2011).
There’s a lot of talk about existential risk:
Kurzweil, Ray, “Ray Kurzweil: The H+ Interview.”
exhibit unethical decision-making tendencies:
Piff, Paul, Daniel Stancato, Stephane Cote, Rodolfo Mendoza-Denton, and Dacher Keltner, “Higher social class predicts increased unethical behavior,”
Proceedings of the National Academy of Sciences,
no. 26 (January 2012),
http://www.pnas.org/content/early/2012/02/21/1118373109.abstract
(accessed February 11, 2012).
The Singularitarians’ conceit:
Incidentally, there’s some interesting writing on the Web about the concept of a Singleton. Conceived by ethicist Nick Bostrom, a “Singleton” is a sole dominant AI in control of decisions at the highest level. See Bostrom, Nick, “What is a Singleton?” last modified 2005,
http://www.nickbostrom.com/fut/singleton.html
(accessed September 19, 2011).
Technologists are providing almost religious visions:
Markoff, John, “Scientists Worry Machines May Outsmart Man,”
New York Times
, sec. science, July 25, 2009,
http://www.nytimes.com/2009/07/26/science/26robot.html
(accessed September 25, 2011).
costly, unforeseen behaviors:
Horvitz, Eric, AAAI, “Interim Report from the Panel Chairs, AAAI Presidential Panel on Long-Term AI Futures, August 2009,” last modified 2009,
http://research.microsoft.com/en-us/um/people/horvitz/note_from_AAAI_panel_chairs.pdf
(accessed September 28, 2011).
I went in very optimistic about the future of AI:
Markoff, John, “Scientists Worry Machines May Outsmart Man.”
11: A HARD TAKEOFF
However, an intelligence explosion may be unavoidable:
I’m not sure this is true for Marcus Hutter’s AIXI, though experts tell me it is. But since AIXI is uncomputable, it would never be a candidate for an intelligence explosion anyway. AIXItl—a computable approximation of AIXI—is another matter. This is also probably not true of mind uploading, if such a thing ever comes to pass.
Computer science-based researchers want to engineer AGI:
The mind versus brain debate is too large to address here.
with $50 million in grants:
Lenat, Doug, “Doug Lenat on Cyc, a truly semantic Web, and artificial intelligence (AI),”
developerWorks,
September 16, 2008,
http://www.ibm.com/developerworks/podcast/dwi/cm-int091608txt.html
(accessed September 28, 2011).
Carnegie Mellon University’s NELL:
Lohr, Steve, “Aiming to Learn as We Do, a Machine Teaches Itself,”
New York Times,
sec. science, October 4, 2010,
http://www.nytimes.com/2010/10/05/science/05compute.html?pagewanted=all
(accessed September 28, 2011).
Many, especially those at MIRI:
Some Singularitarians want to get to AGI as soon as possible, owing to its potential to alleviate human suffering. This is Ray Kurzweil’s position. Others feel achieving AGI will move them closer to ensuring their own immortality. MIRI’s founders, including Eliezer Yudkowsky, hope AGI takes a long time to arrive, because more time allows for more and better research, which may reduce the likelihood that we destroy ourselves.
these drives are: efficiency, self-preservation, resource acquisition, and creativity:
Omohundro, Stephen, “The Basic AI Drives,” November 11, 2007,
http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/
(accessed June 1, 2011).