As we approach a more complete harnessing of the improving density of computation, processor speeds are now effectively doubling every twelve months. This is fully feasible today when we build hardware-based neural nets because neural net processors are relatively simple and highly parallel. Here we create a processor for each neuron and eventually one for each interneuronal connection. Moore's Law thereby enables us to double both the number of processors and their speed every two years, an effective quadrupling of the number of interneuronal-connection calculations per second.
This apparent acceleration in the acceleration of computer speeds may result, therefore, from an improving ability to benefit from both strands of the Law of Accelerating Returns. When Moore’s Law dies by the year 2020, new forms of circuitry beyond integrated circuits will continue both strands of exponential improvement. But ordinary exponential growth—two strands of it—is dramatic enough. Using the more conservative prediction of just one level of acceleration as our guide, let’s consider where the Law of Accelerating Returns will take us in the twenty-first century.
The human brain has about 100 billion neurons. With an estimated average of one thousand connections between each neuron and its neighbors, we have about 100 trillion connections, each capable of a simultaneous calculation. That’s rather massive parallel processing, and one key to the strength of human thinking. A profound weakness, however, is the excruciatingly slow speed of neural circuitry, only 200 calculations per second. For problems that benefit from massive parallelism, such as neural-net-based pattern recognition, the human brain does a great job. For problems that require extensive sequential thinking, the human brain is only mediocre.
With 100 trillion connections, each computing at 200 calculations per second, we get 20 million billion calculations per second. This is a conservatively high estimate; other estimates are lower by one to three orders of magnitude. So when will we see the computing speed of the human brain in your personal computer?
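Before turning to that question, it may help to see the arithmetic laid out. The short Python sketch below simply multiplies the round estimates quoted above, nothing more.

# Arithmetic behind the brain-capacity estimate quoted above.
# All inputs are the chapter's own round estimates, not measurements.
neurons = 100e9                  # about 100 billion neurons
connections_per_neuron = 1_000   # average connections to neighboring neurons
calcs_per_connection = 200       # calculations per second per connection

connections = neurons * connections_per_neuron    # about 100 trillion
brain_cps = connections * calcs_per_connection    # calculations per second

print(f"connections: {connections:.0e}")                        # 1e+14
print(f"brain speed: {brain_cps:.0e} calculations per second")  # 2e+16, i.e. 20 million billion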
The answer depends on the type of computer we are trying to build. The most relevant is a massively parallel neural net computer. In 1997, $2,000 of neural computer chips using only modest parallel processing could perform around 2 billion connection calculations per second. Since neural net emulations benefit from both strands of the acceleration of computational power, this capacity will double every twelve months. Thus by the year 2020, it will have doubled about twenty-three times, resulting in a speed of about 20 million billion neural connection calculations per second, which is equal to the human brain.
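Here is a minimal sketch of that projection, using the 1997 starting point quoted above and one doubling per year; with these rounded inputs the 2020 figure lands just under the 20-million-billion mark, which is what "about" is doing in the sentence above.

# Projection of neural-net chip capacity from the 1997 figure above
# (about 2 billion connection calculations per second for $2,000 of
# chips), doubling every twelve months.
start_cps = 2e9
doublings = 2020 - 1997               # 23 annual doublings
cps_2020 = start_cps * 2 ** doublings
print(f"{cps_2020:.2e}")              # about 1.7e+16, roughly 20 million billion per second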
If we apply the same analysis to an “ordinary” personal computer, we get the year 2025 to achieve human brain capacity in a $1,000 device.
This is because the general-purpose computations that a conventional personal computer is designed for are inherently more expensive than the simpler, highly repetitive neural-connection calculations. Thus I believe that the 2020 estimate is more accurate, because by 2020 most of the computations performed in our computers will be of the neural-connection type.
The memory capacity of the human brain is about 100 trillion synapse strengths (neurotransmitter concentrations at interneuronal connections), which we can estimate at about a million billion bits. In 1998, a billion bits of RAM (128 megabytes) cost about $200. The capacity of memory circuits has been doubling every eighteen months. Thus by the year 2023, a million billion bits will cost about $1,000.
However, this silicon equivalent will run more than a billion times faster than the human brain. There are techniques for trading off memory for speed, so we can effectively match human memory for $1,000 sooner than 2023.
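The memory projection can be checked the same way. The sketch below uses the 1998 price point and the eighteen-month doubling quoted above; with these rounded inputs the crossover lands within a year or two of the 2023 figure.

# Memory-cost projection: about $200 for a billion bits of RAM in 1998,
# with capacity per dollar doubling every eighteen months.
bits_per_dollar = 1e9 / 200        # 1998 starting point
target_bits = 1e15                 # about a million billion bits
budget = 1_000                     # dollars

year = 1998.0
while bits_per_dollar * budget < target_bits:
    bits_per_dollar *= 2
    year += 1.5                    # one doubling every eighteen months

print(round(year))                 # early to mid 2020s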
[Figure: The Exponential Growth of Computing, 1900-2100]
Taking all of this into consideration, it is reasonable to estimate that a $1,000 personal computer will match the computing speed and capacity of the human brain by around the year 2020, particularly for the neuron-connection calculation, which appears to comprise the bulk of the computation in the human brain. Supercomputers are one thousand to ten thousand times faster than personal computers. As this book is being written, IBM is building a supercomputer based on the design of Deep Blue, its silicon chess champion, capable of 10 teraflops (that is, 10 trillion calculations per second), only 2,000 times slower than the human brain. Japan’s Nippon Electric Company hopes to beat that with a 32-teraflop machine. IBM then hopes to follow that with 100 teraflops by around the year 2004 (just what Moore’s Law predicts, by the way). Supercomputers will reach the 20 million billion calculations per second capacity of the human brain around 2010, a decade earlier than personal computers.
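The "2,000 times slower" figure is just the ratio of the two numbers already given; a one-line check, using the 20-million-billion estimate derived earlier:

# Ratio of the brain estimate to a 10-teraflop machine.
brain_cps = 2e16          # 20 million billion calculations per second
supercomputer = 10e12     # 10 teraflops
print(int(brain_cps / supercomputer))   # 2000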
In another approach, projects such as Sun Microsystems' Jini program have been initiated to harvest the unused computation on the Internet. Note that at any particular moment, the significant majority of the computers on the Internet are not being used. Even those that are being used are not being used to capacity (for example, typing text uses less than one percent of a typical notebook computer's computing capacity). Under the Internet computation harvesting proposals, cooperating sites would load special software that would enable a virtual massively parallel computer to be created out of the computers on the network. Each user would still have priority over his or her own machine, but in the background, a significant fraction of the millions of computers on the Internet would be harvested into one or more supercomputers. The amount of unused computation on the Internet today exceeds the computational capacity of the human brain, so we already have available in at least one form the hardware side of human intelligence. And with the continuation of the Law of Accelerating Returns, this availability will become increasingly ubiquitous.
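The harvesting idea can be sketched in a few lines. The code below is a hypothetical illustration of the general scheme described above, not Jini's actual interface; the function names and the idleness policy are stand-ins.

# Schematic sketch of idle-cycle harvesting (hypothetical, simplified).
def machine_is_idle() -> bool:
    # Placeholder policy: a real client would check CPU load, keyboard,
    # and mouse activity so the owner always keeps priority.
    return True

def fetch_work_unit():
    # Placeholder: a coordinating site would hand out one independent
    # slice of a large, highly parallel job.
    return range(1_000_000)

def compute(work_unit):
    # Stand-in for the real computation assigned to this machine.
    return sum(x * x for x in work_unit)

if machine_is_idle():
    # Background work runs only while the machine would otherwise sit unused.
    print("returning partial result:", compute(fetch_work_unit()))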
After human capacity in a $1,000 personal computer is achieved around the year 2020, our thinking machines will improve the cost performance of their computing by a factor of two every twelve months. That means that the capacity of computing will double ten times every decade, which is a factor of one thousand (2^10) every ten years. So your personal computer will be able to simulate the brain power of a small village by the year 2030, the entire population of the United States by 2048, and a trillion human brains by 2060.
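These milestones follow directly from one doubling per year after 2020; the short check below prints the multiples for the three years named above.

# Human-brain equivalents in a $1,000 machine, doubling yearly after 2020.
base_year = 2020
for year in (2030, 2048, 2060):
    brains = 2 ** (year - base_year)
    print(year, f"{brains:,}")
# 2030: 1,024 (a small village)
# 2048: 268,435,456 (roughly the U.S. population)
# 2060: 1,099,511,627,776 (about a trillion)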
If we estimate the human Earth population at 10 billion persons, one penny’s worth of computing circa 2099 will have a billion times greater computing capacity than all humans on Earth.
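As a rough consistency check on that claim, the sketch below starts from a penny's worth of 2020 computing and asks how many annual doublings reach a billion times the combined capacity of ten billion human brains; the answer comes out near the turn of the next century.

# Rough check of the circa-2099 penny claim, using the chapter's estimates.
import math

brain_cps = 2e16                           # one human brain
all_humans = 10e9 * brain_cps              # 2e26 calculations per second
target = 1e9 * all_humans                  # a billion times greater: 2e35

penny_2020 = (brain_cps / 1_000) * 0.01    # a penny's worth in 2020: about 2e11
print(round(math.log2(target / penny_2020)))   # about 80 doublings, i.e. roughly the year 2100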
Of course I may be off by a year or two. But computers in the twenty-first century will not be wanting for computing capacity or memory.
Computing Substrates in the Twenty-First Century
 
I’ve noted that the continued exponential growth of computing is implied by the Law of Accelerating Returns, which states that any process that moves toward greater order—evolution in particular—will exponentially speed up its pace as time passes. The two resources that the exploding pace of an evolutionary process—such as the progression of computer technology—requires are (1) its own increasing order, and (2) the chaos in the environment in which it takes place. Both of these resources are essentially without limit.
Although we can anticipate the overall acceleration in technological progress, one might still expect that the actual manifestation of this progression would be somewhat irregular. After all, it depends on such variable phenomena as individual innovation, business conditions, investment patterns, and the like. Contemporary theories of evolutionary processes, such as the Punctuated Equilibrium theories, posit that evolution works by periodic leaps or discontinuities followed by periods of relative stability. It is thus remarkable how predictable computer progress has been.
So, how will the Law of Accelerating Returns as applied to computation roll out in the decades beyond the demise of Moore’s Law on Integrated Circuits by the year 2020? For the immediate future, Moore’s Law will continue with ever smaller component geometries packing greater numbers of yet faster transistors on each chip. But as circuit dimensions reach near atomic sizes, undesirable quantum effects such as unwanted electron tunneling will produce unreliable results. Nonetheless, Moore’s standard methodology will get very close to human processing power in a personal computer and beyond that in a supercomputer.
The next frontier is the third dimension. Already, venture-backed companies (mostly California-based) are competing to build chips with dozens and ultimately thousands of layers of circuitry. With names like Cubic Memory, Dense-Pac, and Staktek, these companies are already shipping functional three-dimensional "cubes" of circuitry. Although not yet cost competitive with the customary flat chips, the third dimension will be there when we run out of space in the first two.
Computing with Light
 
Beyond that, there is no shortage of exotic computing technologies being developed in research labs, many of which have already demonstrated promising results. Optical computing uses streams of photons (particles of light) rather than electrons. A laser can produce billions of coherent streams of photons, with each stream performing its own independent series of calculations. The calculations on each stream are performed in parallel by special optical elements such as lenses, mirrors, and diffraction gratings. Several companies, including Quanta-Image, Photonics, and Mytec Technologies, have applied optical computing to the recognition of fingerprints. Lockheed has applied optical computing to the automatic identification of malignant breast lesions.
The advantage of an optical computer is that it is massively parallel with potentially trillions of simultaneous calculations. Its disadvantage is that it is not programmable and performs a fixed set of calculations for a given configuration of optical computing elements. But for important classes of problems such as recognizing patterns, it combines massive parallelism (a quality shared by the human brain) with extremely high speed (which the human brain lacks).
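A loose software analogy may make that fixed-but-parallel character concrete. The sketch below (assuming NumPy, with random stand-in imagery rather than real fingerprints) locates a template everywhere in an image with a single Fourier correlation, much as a fixed arrangement of lenses and gratings transforms an entire light field in one step.

# Software analogy for fixed-function, massively parallel pattern matching.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))        # stand-in for a fingerprint image
template = image[100:132, 100:132]    # the patch we want to locate

# One Fourier correlation evaluates every image position at once;
# the "configuration" (the template) stays fixed.
padded = np.zeros_like(image)
padded[:32, :32] = template
corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(padded))).real

print(np.unravel_index(np.argmax(corr), corr.shape))   # roughly (100, 100)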
Computing with the Machinery of Life
 
A new field called molecular computing has sprung up to harness the DNA molecule itself as a practical computing device. DNA is nature's own nanoengineered computer, and it is well suited for solving combinatorial problems. Combining attributes is, after all, the essence of genetics. Applying actual DNA to practical computing applications got its start when Leonard Adleman, a University of Southern California mathematician, coaxed a test tube full of DNA molecules (see the box below) to solve the well-known "traveling salesperson" problem. In this classic problem, we try to find an optimal route for a hypothetical traveler between multiple cities without visiting any city more than once. Only certain city pairs are connected by routes, so finding the right path is not straightforward. It is an ideal problem for a recursive algorithm, although if the number of cities is too large, even a very fast recursive search will take far too long.
Professor Adleman and other scientists in the molecular-computing field have identified a set of enzyme reactions that corresponds to the logical and arithmetic operations needed to solve a variety of computing problems. Although DNA molecular operations produce occasional errors, the number of DNA strands being used is so large that any molecular errors become statistically insignificant. Thus, despite the inherent error rate in DNA’s computing and copying processes, a DNA computer can be highly reliable if properly designed.
DNA computers have subsequently been applied to a range of difficult combinatorial problems. A DNA computer is more flexible than an optical computer but it is still limited to the technique of applying massive parallel search by assembling combinations of elements.
There is another, more powerful way to apply the computing power of DNA that has not yet been explored. I present it below in the section on quantum computing.
 
HOW TO SOLVE THE TRAVELING-SALESPERSON PROBLEM USING A TEST TUBE OF DNA
 
One of DNA's advantageous properties is its ability to replicate itself and the information it contains. To solve the traveling-salesperson problem, Professor Adleman performed the following steps (a short software sketch of the same generate-and-filter procedure follows the list):
• Generate a small strand of DNA with a unique code for each city.
• Replicate each such strand (one for each city) trillions of times using a process called “polymerase chain reaction” (PCR).
• Next, put the pools of DNA (one for each city) together in a test tube. This step uses DNA's affinity to link strands together. Longer strands form automatically, and each such longer strand represents a possible route through multiple cities. The small strands representing each city link up with one another in a random fashion, so there is no mathematical certainty that a linked strand representing the correct answer (the right sequence of cities) will be formed. However, the number of strands is so vast that it is virtually certain that at least one strand (and probably millions) will be formed that represents the correct answer.
The next steps use specially designed enzymes to eliminate the trillions of strands that represent the wrong answer, leaving only the strands representing the correct answer:
• Use molecules called primers to destroy those DNA strands that do not start with the start city as well as those that do not end with the end city, and replicate these surviving strands (using PCR).
• Use an enzyme reaction to eliminate those DNA strands that represent a travel path longer than the total number of cities.
• Use an enzyme reaction to destroy those strands that do not include the first city. Repeat for each of the cities.
• Now, each of the surviving strands represents the correct answer. Replicate these surviving strands (using PCR) until there are billions of such strands.
• Using a technique called electrophoresis, read out the DNA sequence of these correct strands (as a group). The readout looks like a set of distinct lines, which specifies the correct sequence of cities.
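For readers who prefer code to chemistry, the sketch below imitates the same generate-and-filter procedure in software: random "strands" (city sequences) stand in for linked DNA, and each filter mirrors one of the steps above. The five-city map and its road list are invented for illustration.

# Software imitation of the DNA generate-and-filter procedure (toy example).
import random

cities = ["A", "B", "C", "D", "E"]
roads = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"),
         ("A", "C"), ("B", "D"), ("C", "E")}           # connected city pairs
start, end = "A", "E"

random.seed(1)
strands = [[random.choice(cities) for _ in range(len(cities))]
           for _ in range(200_000)]                    # massive random assembly

def connected(path):
    # In the test tube, only connected cities link; here we check explicitly.
    return all((a, b) in roads for a, b in zip(path, path[1:]))

survivors = [p for p in strands
             if p[0] == start and p[-1] == end         # correct first and last city
             and len(set(p)) == len(cities)            # every city exactly once
             and connected(p)]

print(sorted(set(map(tuple, survivors))))              # the surviving route(s)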
 