Imagine a native English speaker:
Searle, John, “Minds, Brains and Programs,” Behavioral and Brain Sciences 3 (1980): 417–457.
how can we claim humans “understand” language?:
This idea is from a personal communication with Dr. Richard Granger, July 24, 2012.
Kurzweil writes:
Kurzweil, Ray, The Singularity Is Near: When Humans Transcend Biology (New York: Viking, 2005), 29.
4: THE HARD WAY
He doesn’t have children:
Cryonics is the practice of preserving bodies at very low temperatures, in this instance dead humans, for future repair and revival.
Yudkowsky’s grief came out:
Yudkowsky, Eliezer, “Yehuda Yudkowsky, 1985–2004,” 2004, http://yudkowsky.net/other/Yehuda (accessed June 1, 2011).
I’m a great fan of Bach’s music:
OkCupid, “EYudkowsky,” last modified 2012, http://www.okcupid.com/profile/EYudkowsky (accessed June 14, 2012).
They do not want to hear:
Baez, John, “Interview with Eliezer Yudkowsky,” Azimuth (blog), March 25, 2011, http://johncarlosbaez.wordpress.com/2011/03/25/this-weeks-finds-week-313/ (accessed June 14, 2012).
The human species came into existence:
Yudkowsky, Eliezer, “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” August 31, 2006, http://intelligence.org/files/AIPosNegFactor.pdf (accessed February 28, 2013).
it only takes one error:
Baez, “Interview with Eliezer Yudkowsky.”
Friendly AI pursues goals:
Yudkowsky, Eliezer, “Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures,” 2001, http://intelligence.org/files/CFAI.pdf (accessed March 4, 2013).
transforming first all of earth:
Bostrom, Nick, Oxford University, “Ethical Issues in Advanced Artificial Intelligence,” last modified 2003, http://www.nickbostrom.com/ethics/ai.html (accessed June 14, 2012).
We want a moving scale:
Machine Intelligence Research Institute, “Reducing long-term catastrophic risks from artificial intelligence,” 2009, http://intelligence.org/files/ReducingRisks.pdf (accessed March 3, 2013).
knew more, thought faster:
Yudkowsky, Eliezer, “Coherent Extrapolated Volition,” May 2004, http://intelligence.org/files/CEV.pdf (accessed March 3, 2013).
And it’s surpassed:
Greenemeier, Larry, “Computers have a lot to learn from the human brain, engineers say,” Scientific American, March 10, 2009, http://www.scientificamerican.com/blog/post.cfm?id=computers-have-a-lot-to-learn-from-2009-03-10 (accessed May 18, 2011).
MIRI President Michael Vassar:
In January 2012 Michael Vassar resigned as president of MIRI to cofound MetaMed, a start-up offering personalized, evidence-based diagnostics and treatment. He was replaced by Luke Muehlhauser.
At the height of:
“The Inside Story of the SWORDS Armed Robot ‘Pullout’ in Iraq: Update,” Popular Mechanics, October 1, 2009, http://www.popularmechanics.com/technology/gadgets/4258963 (accessed May 18, 2011).
In 2007 in South Africa:
Shachtman, Noah, “Inside the Robo-Cannon Rampage (Updated),” WIRED, October 19, 2007, http://www.wired.com/dangerroom/2007/10/inside-the-robo/ (accessed May 18, 2011).
Gandhi doesn’t want to kill people:
Yudkowsky, Eliezer, “Singularity,” http://yudkowsky.net/singularity (accessed June 15, 2012).
And so we are:
But wait—wasn’t that the same kind of anthropomorphizing Yudkowsky had pinned on me? Our basic human goals drift over generations, even within a lifetime. But would a machine’s? I think Hughes uses the analogy of a human in a proper, nonanthropomorphizing way. That is, we humans are an existence proof of something with deeply embedded utility functions, such as the drive to reproduce, yet we can override them. It’s similar to Yudkowsky’s Gandhi analogy, which isn’t anthropomorphizing either.
Between 2002 and 2005:
Yudkowsky, Eliezer, “Shut Up and Do the Impossible,” Less Wrong (blog), October 8, 2008, http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/ (accessed May 18, 2010). From this starting point you can begin a search on the AI-Box Experiment and learn almost everything about it that I did.
May not machines carry out:
Turing, A. M., “Computing Machinery and Intelligence,” Mind 59 (1950): 433–460.
Marvin Minsky, one of the founders:
Newsgroups comp.ai and comp.ai.philosophy, last modified March 30, 1995, http://loebner.net/Prizef/minsky.txt (accessed July 18, 2011).
5: PROGRAMS THAT WRITE PROGRAMS
… we are beginning to depend:
Hillis, Danny, “The Big Picture,” WIRED, June 1, 1998.
Surely no harm could come:
Omohundro, Stephen, “The Basic AI Drives,” November 30, 2007, http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf (accessed June 1, 2011).
For Omohundro the conversation:
Omohundro, Stephen, “Self-Improving AI and the Future of Computation,” paper presented at the Stanford EE380 Computer Systems Colloquium, Wednesday, October 24, 2007, http://selfawaresystems.com/2007/11/01/stanford-computer-systems-colloquium-self-improving-ai-and-the-future-of-computing/ (accessed May 18, 2011).
The National Institute of Standards and Technology:
Thibodeau, Patrick, “Study: Buggy software costs users, vendors nearly $60B annually,” Computerworld, June 25, 2002, http://www.computerworld.com/s/article/72245/Study_Buggy_software_costs_users_vendors_nearly_60B_annually (accessed June 1, 2011).
In the simplest sense:
Luger, George F., Artificial Intelligence: Structures and Strategies for Complex Problem Solving (New York: Addison-Wesley, 2002), 352.
Mysteriously, however, no one:
Koza, John R., Martin A. Keane, and Matthew J. Streeter, “Evolving Inventions,” Scientific American, February 2003.
6: FOUR BASIC DRIVES
We won’t really be able to understand:
Kevin Warwick (cybernetics expert), interview by Kevin Gumbs, Building Gods, documentary film, podcast video, 2008, http://topdocumentaryfilms.com/building-gods/ (accessed June 13, 2011).
To increase their chances:
Omohundro, Stephen, “The Basic AI Drives,” November 30, 2007, http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ (accessed June 21, 2011).
It has a model of its own programming:
Omohundro, Stephen, “Foresight Vision Talk: Self-Improving AI and Designing 2030,” January 21, 2008, http://selfawaresystems.com/2007/11/30/foresight-vision-talk-self-improving-ai-and-designing-2030/ (accessed June 22, 2011).
Suppose, Omohundro says:
Omohundro, Stephen, “The Nature of Self-Improving Artificial Intelligence,” January 21, 2008, http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf (accessed June 22, 2011).
And remarkably, if nanotech:
Ibid.
A self-aware system:
Ibid.
When people are surrounded:
de Garis, Hugo, “The Artilect War: Cosmists vs. Terrans,” 2008, http://agi-conf.org/2008/artilectwar.pdf (accessed June 22, 2011).
Will the robots become smarter:
Ibid.
Humans should not stand in the way:
Kristof, Nicholas D., “Robokitty,” New York Times Magazine, August 1, 1999.
In fact, de Garis:
de Garis, Hugo, Brain Builder Group, Evolutionary Systems Department, ATR Human Information Processing Research Laboratories, “CAM-BRAIN: The Evolutionary Engineering of a Billion Neuron Artificial Brain by 2001 Which Grows/Evolves at Electronic Speeds inside a Cellular Automata Machine (CAM),” last modified 1995, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.8902 (accessed June 22, 2011).
a system will consider stealing:
Omohundro, “Foresight Vision Talk: Self-Improving AI and Designing 2030.”
They are going to want:
Omohundro, “The Nature of Self-Improving Artificial Intelligence.”
There is a first-mover advantage:
Ibid.
That’s because M13 will:
Steele, Bill, Cornell News, “It’s the 25th anniversary of Earth’s first (and only) attempt to phone E.T.,” last modified November 12, 1999, http://web.archive.org/web/20080802005337/http://www.news.cornell.edu/releases/Nov99/Arecibo.message.ws.html (accessed July 2, 2011).
I think we could spend:
Kazan, Casey, “The Search for ET: Should It Focus on Hot Stars, Black Holes and Neutron Stars?” The Daily Galaxy, October 4, 2010, http://www.dailygalaxy.com/my_weblog/2010/10/the-search-for-et-should-it-focus-on-hot-stars-black-holes-and-neutron-stars-todays-most-popular.html (accessed July 2, 2011).
One frigid example is Bok globules:
Ibid.
And don’t forget:
Yudkowsky, Eliezer, “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” September 2008, http://intelligence.org/files/AIRisk.pdf (accessed March 3, 2013).
The 1986 Chernobyl:
INSAG-7: The Chernobyl Accident: Updating of INSAG-1 (Vienna: International Atomic Energy Agency, 1992), http://www-pub.iaea.org/MTCD/publications/PDF/Pub913e_web.pdf (accessed July 2, 2011).
We have produced designs so complicated:
Perrow, Charles, Normal Accidents: Living with High-Risk Technologies (Princeton, NJ: Princeton University Press, 1999), 11.
The point of HFTs:
CBS News, “How Speed Traders Are Changing Wall Street,” 60 Minutes, October 11, 2010, http://www.cbsnews.com/stories/2010/10/07/60minutes/main6936075.shtml (accessed July 3, 2011).
After the sale, the price:
Cohan, Peter, “The 2010 Flash Crash: What Caused It and How to Prevent the Next One,” Daily Finance, August 18, 2010, http://www.dailyfinance.com/2010/08/18/the-2010-flash-crash-what-caused-it-and-how-to-prevent-the-next/ (accessed July 3, 2011).
The lower price automatically:
Nanex, “Analysis of the ‘Flash Crash,’” last modified June 18, 2010, http://www.nanex.net/20100506/FlashCrashAnalysis_CompleteText.html.
not only unexpected:
Perrow, Charles, Normal Accidents, 8.
We know that a lot of algorithms:
“The Market’s Black Box: Engine for Efficiency or Ever-Growing Monster?” ParisTech Review, August 25, 2010, http://www.paristechreview.com/2010/08/25/market-black-box-efficiency-growing-monster/ (accessed July 2, 2011).
That monster struck again:
Matthews, Christopher, “High Frequency Trading: Wall Street’s Doomsday Machine?” Time, August 8, 2012, http://business.time.com/2012/08/08/high-frequency-trading-wall-streets-doomsday-machine/ (accessed September 7, 2012).
An agent which sought only:
Omohundro, “The Nature of Self-Improving Artificial Intelligence.”
The AI’s fourth drive:
Ibid.
On Omohundro’s wish list:
Ibid.
With both logic and inspiration:
Ibid.
7: THE INTELLIGENCE EXPLOSION
From the standpoint of existential risk:
Yudkowsky, Eliezer, “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” September 2008, http://intelligence.org/files/AIPosNegFactor.pdf (accessed March 3, 2013).
I have a quarter-baked idea:
Banks, David L., “A Conversation with I. J. Good,” Statistical Science 11, no. 1 (1996): 1–19, http://www.web-e.stat.vt.edu/holtzman/IJGood/A_Conversation_with_IJGood_Davild_L_Banks_1992.pdf (accessed July 2, 2011).
Let an ultraintelligent machine be defined:
Good, I. J., “Speculations Concerning the First Ultraintelligent Machine,” in Franz L. Alt and Morris Rubinoff, eds., Advances in Computers, vol. 6 (New York: Academic Press, 1965), 31–88.
The Singularity has three well-developed definitions:
Yudkowsky, Eliezer, Machine Intelligence Research Institute, “Three Major Singularity Schools,” last modified September 2007, http://yudkowsky.net/singularity/schools (accessed April 2, 2010).
As a boy Good’s father:
Banks, David L., “A Conversation with I. J. Good.”
German U-boats:
Trueman, Chris, History Learning Site, “World War Two: U-boats,” last modified 2011, http://www.historylearningsite.co.uk/u-boats.htm (accessed December 2, 2011).
Each key displayed a letter:
Sale, Tony, “The Principle of the Enigma,” March 2001, http://www.codesandciphers.org.uk/enigma/enigma1.htm (accessed September 5, 2011).
Turing and his colleagues:
Bletchley Park National Codes Center, “Machines behind the codes,” last modified 2011, http://www.bletchleypark.org.uk/content/machines.rhtm (accessed September 6, 2011).
The heroes of Bletchley Park:
Hinsley, Harry, “The Influence of ULTRA in the Second World War,” Babbage Lecture Theatre, Computer Laboratory, last modified November 26, 1996, http://www.cl.cam.ac.uk/research/security/Historical/hinsley.html (accessed September 6, 2011).
At Bletchley Turing:
Banks, “A Conversation with I. J. Good.”