The Age of Spiritual Machines: When Computers Exceed Human Intelligence

Ray Kurzweil
It is unworthy of excellent men to lose hours like slaves in the labor of calculation which could safely be relegated to anyone else if machines were used.
—Gottfried Wilhelm Leibniz
 
 
Artificial stupidity (AS) may be defined as the attempt by computer scientists to create computer programs capable of causing problems of a type normally associated with human thought.
—Wallace Marshall
 
 
Artificial intelligence (AI) is the science of how to get machines to do the things they do in the movies.
—Astro Teller
 
The Ballad of Charles and Ada
 
Returning to the evolution of intelligent machines, we find Charles Babbage sitting in the rooms of the Analytical Society at Cambridge, England, in 1821, with a table of logarithms lying before him.
“Well, Babbage, what are you dreaming about?” asked another member, seeing Babbage half asleep.
“I am thinking that all these tables might be calculated by machinery!” Babbage replied.
From that moment on, Babbage devoted most of his waking hours to an unprecedented vision: the world’s first programmable computer. Although based entirely on the mechanical technology of the nineteenth century, Babbage’s “Analytical Engine” was a remarkable foreshadowing of the modern computer.[1]
Babbage developed a liaison with the beautiful Ada Lovelace, the only legitimate child of Lord Byron, the poet. She became as obsessed with the project as Babbage, and contributed many of the ideas for programming the machine, including the invention of the programming loop and the subroutine. She was the world’s first software engineer, indeed the only software engineer prior to the twentieth century.
Lovelace significantly extended Babbage’s ideas and wrote a paper on programming techniques, sample programs, and the potential of this technology to emulate intelligent human activities. She described the speculations of Babbage and herself on the capacity of the Analytical Engine, and future machines like it, to play chess and compose music. She concluded that although the computations of the Analytical Engine could not properly be regarded as “thinking,” they could nonetheless perform activities that would otherwise require the extensive application of human thought.
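It is worth pausing on what those two inventions look like. Lovelace’s famous Note G program computed Bernoulli numbers on the never-built Analytical Engine; as a purely modern illustration, here is the same loop-plus-subroutine idea in Python (the language and code are ours, not hers, and nothing like her punched-card notation):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Subroutine: return the Bernoulli numbers B_0 .. B_n exactly,
    using the recurrence sum over j <= m of C(m+1, j) * B_j = 0."""
    b = [Fraction(0)] * (n + 1)
    b[0] = Fraction(1)
    for m in range(1, n + 1):    # the loop: each B_m reuses earlier values
        acc = sum(comb(m + 1, j) * b[j] for j in range(m))
        b[m] = -acc / (m + 1)
    return b

print(bernoulli(8))  # 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
```

The loop and the subroutine, exactly as Lovelace conceived them, remain the two basic moves of programming a century and a half later.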
The story of Babbage and Lovelace ends tragically. She died a painful death from cancer at the age of thirty-six, leaving Babbage alone again to pursue his quest. Despite his ingenious constructions and exhaustive effort, the Analytical Engine was never completed. Near the end of his existence he remarked that he had never had a happy day in his life. Only a few mourners were recorded at Babbage’s funeral in 1871.[2]
What did survive were Babbage’s ideas. The first American programmable computer, the Mark I, completed in 1944 by Howard Aiken of Harvard University and IBM, borrowed heavily from Babbage’s architecture. Aiken commented, “If Babbage had lived seventy-five years later, I would have been out of a job.”[3]
Babbage and Lovelace were innovators nearly a century ahead of their time. Despite Babbage’s inability to finish any of his major initiatives, their concepts of a computer with a stored program, self-modifying code, addressable memory, conditional branching, and computer programming itself still form the basis of computers today.[4]
Again, Enter Alan Turing
 
By 1940, Hitler had the mainland of Europe in his grasp, and England was preparing for an anticipated invasion. The British government organized its best mathematicians and electrical engineers, under the intellectual leadership of Alan Turing, with the mission of cracking the German military code. It was recognized that with the German air force enjoying superiority in the skies, failure to accomplish this mission was likely to doom the nation. In order not to be distracted from their task, the group lived in the tranquil pastures of Buckinghamshire, England.
Turing and his colleagues constructed the world’s first operational computer from telephone relays and named it Robinson,[5] after a popular cartoonist who drew “Rube Goldberg” machines (very ornate machinery with many interacting mechanisms). The group’s own Rube Goldberg succeeded brilliantly and provided the British with a transcription of nearly all significant Nazi messages. As the Germans added to the complexity of their code (by adding additional coding wheels to their Enigma coding machine), Turing replaced Robinson’s electromagnetic intelligence with an electronic version called Colossus, built from two thousand radio tubes. Colossus and nine similar machines running in parallel provided an uninterrupted decoding of vital military intelligence to the Allied war effort.
Use of this information required supreme acts of discipline on the part of the British government. Cities that were to be bombed by Nazi aircraft were not forewarned, lest preparations arouse German suspicions that their code had been cracked. The information provided by Robinson and Colossus was used only with the greatest discretion, but the cracking of Enigma was enough to enable the Royal Air Force to win the Battle of Britain.
Thus fueled by the exigencies of war, and drawing upon a diversity of intellectual traditions, a new form of intelligence emerged on Earth.
The Birth of Artificial Intelligence
 
The similarity of the computational process to the human thinking process was not lost on Turing. In addition to having established much of the theoretical foundation of computation and having invented the first operational computer, he was instrumental in the early efforts to apply this new technology to the emulation of intelligence.
In his classic 1950 paper, “Computing Machinery and Intelligence,” Turing described an agenda that would in fact occupy the next half century of advanced computer research: game playing, decision making, natural language understanding, translation, theorem proving, and, of course, encryption and the cracking of codes.[6]
He wrote (with his friend David Champernowne) the first chess-playing program.
As a person, Turing was unconventional and extremely sensitive. He had a wide range of unusual interests, from the violin to morphogenesis (the differentiation of cells). There were public reports of his homosexuality, which greatly disturbed him, and he died at the age of forty-one, a suspected suicide.
The Hard Things Were Easy
 
In the 1950s, progress came so rapidly that some of the early pioneers felt that mastering the functionality of the human brain might not be so difficult after all. In 1956, AI researchers Allen Newell, J. C. Shaw, and Herbert Simon created a program called Logic Theorist (and in 1957 a later version called General Problem Solver), which used recursive search techniques to solve problems in mathematics.[7]
Recursion, as we will see later in this chapter, is a powerful method of defining a solution in terms of itself. Logic Theorist and General Problem Solver were able to find proofs for many of the key theorems in Bertrand Russell and Alfred North Whitehead’s seminal work on set theory, Principia Mathematica,[8] including a completely original proof for an important theorem that had never previously been solved. These early successes led Simon and Newell to say in a 1958 paper, entitled “Heuristic Problem Solving: The Next Advance in Operations Research,” “There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until—in a visible future—the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”[9]
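Recursive search deserves a concrete illustration before we go on. Here is a minimal sketch in Python (purely illustrative; Logic Theorist was written in the IPL language, and its actual machinery was far more elaborate, as are the toy rules and axioms invented below). A goal is established by recursively establishing its subgoals, a solution defined in terms of itself:

```python
# Toy backward-chaining proof search in the spirit of Logic Theorist.
# RULES maps each goal to alternative lists of subgoals that establish it
# (these particular rules and axioms are invented for the example).
RULES = {
    "theorem": [["lemma1", "lemma2"]],   # theorem follows from both lemmas
    "lemma1":  [["axiom1"]],
    "lemma2":  [["axiom2"], ["axiom1"]], # two alternative derivations
}
AXIOMS = {"axiom1", "axiom2"}

def provable(goal):
    """A goal is provable if it is an axiom, or if every subgoal of some
    rule for it is provable. Note the self-reference: provable() is
    defined in terms of provable()."""
    if goal in AXIOMS:                   # base case: stop the recursion
        return True
    return any(all(provable(sub) for sub in subgoals)
               for subgoals in RULES.get(goal, []))

print(provable("theorem"))  # True
```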
The Simon and Newell paper goes on to predict that within ten years (that is, by 1968) a digital computer would be the world chess champion. A decade later, an unrepentant Simon predicted that by 1985, “machines will be capable of doing any work that a man can do.” Perhaps Simon was intending a favorable comment on the capabilities of women, but these predictions, decidedly more optimistic than Turing’s, embarrassed the nascent AI field.
The field has been inhibited by this embarrassment to this day, and AI researchers have been reticent in their prognostications ever since. In 1997, when Deep Blue defeated Garry Kasparov, then the reigning human world chess champion, one prominent professor commented that all we had learned was that playing a championship game of chess does not require intelligence after all.[10]
The implication is that capturing real intelligence in our machines remains far beyond our grasp. While I don’t wish to overstress the significance of Deep Blue’s victory, I believe that from this perspective we will ultimately find that there are no human activities that require “real” intelligence.
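What Deep Blue demonstrated, above all, is how far brute-force game-tree search can go. Here is a minimal sketch of minimax, the textbook recursion at the core of such engines, applied to a toy game in Python (the game is invented for the example; Deep Blue itself layered special-purpose hardware, pruning, and hand-tuned chess heuristics on top of this basic scheme):

```python
# Toy game: players alternately remove one or two stones; whoever takes
# the last stone wins. Minimax scores a position by assuming both sides
# play their best move all the way down the tree.

def minimax(stones, maximizing):
    """+1 if the maximizing player can force a win from here, else -1."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Choose the move with the best guaranteed outcome for the mover."""
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: minimax(stones - take, False))

print(best_move(7))  # 1: leaving a multiple of 3 forces a win
```

Scaled up with pruning, opening books, and custom chips, the same recursion plays grandmaster chess without anything we would be tempted to call insight.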
During the 1960s, the academic field of AI began to flesh out the agenda that Turing had described in 1950, with encouraging or frustrating results, depending on your point of view. Daniel G. Bobrow’s program Student could solve algebra problems from natural English-language stories and reportedly did well on high-school math tests.[11] The same performance was reported for Thomas G. Evans’s Analogy program for solving IQ-test geometric-analogy problems.[12] The field of expert systems was initiated with Edward A. Feigenbaum’s DENDRAL, which could answer questions about chemical compounds.[13] And natural-language understanding got its start with Terry Winograd’s SHRDLU, which could understand any meaningful English sentence, so long as you talked about colored blocks.[14]
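The trick, shared by all these systems, was a sharply restricted domain. As a hypothetical, drastically simplified sketch in Python (Winograd’s actual system, built on LISP-based tools, was vastly richer), a few lines can appear to “understand” commands about colored blocks precisely because the world contains nothing else:

```python
import re

# A toy blocks world: each block's position, in the spirit of SHRDLU
# (the world model and grammar below are invented for illustration).
world = {"red": "on table", "green": "on table", "blue": "on red"}

def command(sentence):
    m = re.match(r"put the (\w+) block on the (\w+) block", sentence.lower())
    if not m:
        return "I don't understand."     # anything outside the domain fails
    block, target = m.groups()
    for b in (block, target):
        if b not in world:
            return f"I don't know a {b} block."
    world[block] = f"on {target}"        # update the model of the world
    return "OK."

print(command("Put the green block on the blue block"))  # OK.
print(command("What is the meaning of life?"))           # I don't understand.
```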
The notion of creating a new form of intelligence on Earth emerged with an intense and often uncritical passion simultaneously with the electronic hardware on which it was to be based. The unbridled enthusiasm of the field’s early pioneers also led to extensive criticism of these early programs for their inability to react intelligently in a variety of situations. Some critics, most notably existentialist philosopher and phenomenologist Hubert Dreyfus, predicted that machines would never match human levels of skill in areas ranging from the playing of chess to the writing of books about computers.
It turned out that the problems we thought were difficult—solving mathematical theorems, playing respectable games of chess, reasoning within domains such as chemistry and medicine—were easy, and the multithousand-instructions-per-second computers of the 1950s and 1960s were often adequate to provide satisfactory results. What proved elusive were the skills that any five-year-old child possesses: telling the difference between a dog and a cat, or understanding an animated cartoon. We’ll talk more about why the easy problems are hard in Part II.
Waiting for Real Artificial Intelligence
 
The 1980s saw the early commercialization of artificial intelligence with a wave of new AI companies forming and going public. Unfortunately, many made the mistake of concentrating on a powerful but inherently inefficient interpretive language called LISP, which had been popular in academic AI circles. The commercial failure of LISP and the AI companies that emphasized it created a backlash. The field of AI started shedding its constituent disciplines, and companies in natural-language understanding, character and speech recognition, robotics, machine vision, and other areas originally considered part of the AI discipline now shunned association with the field’s label.
Machines with sharply focused intelligence nonetheless became increasingly pervasive. By the mid-1990s, we saw the infiltration of our financial institutions by systems using powerful statistical and adaptive techniques. Not only were the stock, bond, currency, commodity, and other markets managed and maintained by computerized networks, but the majority of buy-and-sell decisions were initiated by software programs that contained increasingly sophisticated models of their markets. The 1987 stock market crash was blamed in large measure on the rapid interaction of trading programs. Trends that otherwise would have taken weeks to manifest themselves developed in minutes. Suitable modifications to these algorithms have managed to avoid a repeat performance.
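To give a flavor of such a program, consider a moving-average crossover rule, sketched in Python (a hypothetical, drastically simplified strategy, not any actual trading system of the period). When many programs follow correlated rules like this one, a falling price triggers sells that trigger further falls, the feedback loop blamed in 1987:

```python
# Hypothetical rule-driven trading signal (invented strategy and data;
# real systems modeled their markets far more elaborately). Buy when the
# short-term average price rises above the long-term one, sell when it
# falls below.

def moving_average(prices, window):
    return sum(prices[-window:]) / window

def signal(prices, short=5, long=20):
    """Return 'buy', 'sell', or 'hold' from the latest price history."""
    if len(prices) < long:
        return "hold"                     # not enough history yet
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma * 1.01:         # 1% band damps churning
        return "buy"
    if short_ma < long_ma * 0.99:
        return "sell"
    return "hold"

print(signal(list(range(120, 100, -1))))  # a steadily falling market: sell
```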
Since 1990, the electrocardiogram (EKG) has come complete with the computer’s own diagnosis of one’s cardiac health. Intelligent image-processing programs enable doctors to peer deep into our bodies and brains, and computerized bioengineering technology enables drugs to be designed on biochemical simulators. The disabled have been particularly fortunate beneficiaries of the age of intelligent machines. Reading machines have been reading to blind and dyslexic persons since the 1970s, and speech-recognition and robotic devices have been assisting hands-disabled individuals since the 1980s.
Perhaps the most dramatic public display of the changing values of the age of knowledge took place in the military. We saw the first effective example of the increasingly dominant role of machine intelligence in the Gulf War of 1991. The cornerstones of military power from the beginning of recorded history through most of the twentieth century—geography, manpower, firepower, and battle-station defenses—have been largely replaced by the intelligence of software and electronics. Intelligent scanning by unstaffed airborne vehicles, weapons finding their way to their destinations through machine vision and pattern recognition, intelligent communications and coding protocols, and other manifestations of the information age have transformed the nature of war.