Von Neumann, with his contacts in Los Alamos, Washington, Aberdeen, Princeton and Philadelphia, was the catalyst who brought the various threads together and persuaded the authorities to continue funding computer developments after the war. But plans for the kind of machine to succeed ENIAC were drawn up by a large team at the Moore School, with Mauchly and Eckert leading the discussions and contributions from Goldstine, his wife Adele, and Arthur Burks. Von Neumann kept in touch largely by letter, corresponding with Goldstine and keeping up to date with thinking at the Moore School while making suggestions from afar. In the planned EDVAC, the various components of the computer would be separated into different units: a central processing unit (CPU) to do arithmetic, a memory, some sort of control system, and input/output devices, with data being shuffled between them as required. There would also be a fifth, less visible, component: a means of communication between the other four units, in particular the memory and the CPU, called a bus. A key feature of this kind of computer architecture is that problems are solved serially, step by step, going through a chain of instructions in sequence. “The notion of serial operation,” says Goldstine, “was pushed to its absolute limit in the EDVAC design.” In the alternative parallel (or distributed) architecture, favored by Turing, different pieces of the problem are tackled by different parts of the machine simultaneously (in parallel). To take a simple example, if two long numbers are to be added together, a serial machine will add each digit in turn, just as we do, but a parallel machine will add all the digits at once. The serial method is slower, but needs less hardware. The parallel approach, however, avoids the problem with the serial structure that parts of the machine may be sitting idle, doing the electronic equivalent of twiddling their thumbs, while waiting for another part of the machine to finish a task. A related problem is that with such a structure there are delays caused by the constant need to shuttle data and instructions between the memory and the processor, along the bus, a deficiency which became known as “the von Neumann bottleneck.” It got this name because the Moore School plan became known as the von Neumann architecture. And that happened in somewhat acrimonious circumstances.
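To make the distinction concrete, here is a minimal sketch in Python. The digit-list representation and both functions are purely illustrative assumptions of mine; neither models the actual circuitry of EDVAC or of any real parallel machine.

```python
def add_serial(a_digits, b_digits):
    """Add two equal-length digit lists (least significant digit first),
    one digit per step, the way a purely serial machine would."""
    result, carry = [], 0
    for a, b in zip(a_digits, b_digits):
        total = a + b + carry
        result.append(total % 10)
        carry = total // 10
    if carry:
        result.append(carry)
    return result


def add_parallel(a_digits, b_digits):
    """Illustrative 'parallel' version: every pair of digits is summed in a
    single conceptual step; real hardware then needs extra circuitry
    (more hardware, as noted above) to resolve the carries quickly."""
    sums = [a + b for a, b in zip(a_digits, b_digits)]  # all digits at once
    result, carry = [], 0
    for s in sums:  # carry resolution, shown serially here for simplicity
        total = s + carry
        result.append(total % 10)
        carry = total // 10
    if carry:
        result.append(carry)
    return result


# 1234 + 5678, digits stored least significant first
print(add_serial([4, 3, 2, 1], [8, 7, 6, 5]))    # [2, 1, 9, 6], i.e. 6912
print(add_parallel([4, 3, 2, 1], [8, 7, 6, 5]))  # same answer, fewer "steps"
```

The trade-off mirrors the one just described: the serial loop is simple but touches one digit at a time, while the parallel version does more work per step and, in real hardware, pays for it in extra circuitry.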
Working at von Neumann's direction, Goldstine took von Neumann's notes and the letters they had exchanged and prepared a document just over 100 pages long, called “First Draft of a Report on the EDVAC,” which was reproduced in limited numbers and distributed to a select few at the end of June 1945. The ideas discussed in this report were less advanced than those of Alan Turing discussed in the previous chapter, but were much more influential, because of the way they were promoted.
But the way they were promoted did not please everyone.
The snag was that the report appeared with only one name on it as author: that of John von Neumann. Adding injury to insult, the document was later regarded as a publication in the legal sense of the term, placing the ideas in the public domain and preventing them from being patented. And compounding the injury, it transpired that in May 1945 von Neumann had signed a lucrative consultancy deal with IBM, in exchange for which he assigned (with a few exceptions) all the rights to his ideas and inventions to them. Hardly surprising that Eckert later complained that “he sold all our ideas through the back door to IBM.”[14]
Von Neumann now decided that he really wanted a computer at the IAS, where he was based, and with breathtaking insouciance asked the Moore School team to join him there. Goldstine took up the offer; Mauchly and Eckert left the academic world and formed the Electronic Control Company, which became a pioneering and successful commercial computer business, achieving particular success with their UNIVAC machine (Universal Automatic Computer). The EDVAC project staggered on without them, but by the time it was completed, in 1951, it was largely outdated, made obsolescent by other machines built using the architecture described in the “First Draft”: machines conforming to what was now widely known as the von Neumann architecture, bottleneck and all.
The scene was now set for decades of development of those ideas, with valves giving way to transistors and chips, machines getting smaller, faster and more widely available, but with no essential change in the logical structure of computers. The Turing machine in your pocket owes as much to von Neumann (who always acknowledged his debt to “On Computable Numbers”) as to Turing, but is no more advanced in terms of its logical structure than EDVAC itself.
There is no need to go into details about the development of faster and more powerful computers in the second half of the twentieth century. But I cannot resist mentioning one aspect of that story. In a book published as late as 1972, Goldstine commented: “It is however remarkable that Great Britain had such vitality that it could immediately after the war embark on so many well-conceived and well-executed projects in the computer field.”[15]
Goldstine had been at the heart of the development of electronic computers, but the veil (actually, more like an iron curtain) of secrecy surrounding the British codebreaking activities was such that a quarter of a century after the developments described here, he was unaware of the existence of Colossus, and thought that the British had had to start from scratch on the basis of the “First Draft.” What they had really done, as we have seen, was even more remarkable.
SELF-REPLICATING ROBOTS
This isn't quite the end of the story of von Neumann and the machines. Like Turing, von Neumann was fascinated by the idea of artificial intelligence, although he had a different perspective on the rise of the robot. But unlike Turing, he lived long enough (just) to begin to flesh out those ideas.
There were two parts to von Neumann's later work. He was interested in the way that a complex system like the brain can operate effectively even though it is made up of fallible individual components, neurons. In the early computers (and many even today), if one component, such as a vacuum tube, failed, the whole thing would grind to a halt. Yet in the human brain it is possible for the “hardware” to suffer massive injuries and continue to function adequately, if not quite in the same way as before. And he was also interested in the problem of reproduction. Jumping off from Turing's idea of a computer that could mimic the behavior of any other computer,[16] he suggested, first, that there ought to be machines that could make copies of themselves and, secondly, that there could be a kind of universal replicating machine that could make copies of itself and also of any other machine. Both kinds of mimic, or copying machines, come under the general heading “automata.”
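A distant software echo of the first idea is the “quine,” a program whose only output is an exact copy of its own source code. The two-line Python example below is offered purely as an illustration of self-reproduction in the world of programs; it is not something von Neumann wrote, and comments are left out because any extra line would stop the output from matching the source exactly.

```python
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```

Run it and it prints itself, character for character; von Neumann's universal replicating machine is, loosely speaking, a physical version of this trick, with the added ability to copy other machines as well.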
Von Neumann's interests in working out how workable devices can be made from parts prone to malfunction, and in how complex a system would have to be in order to reproduce itself, both began to grow in 1947. This was partly because he was moving on from the development of computers like the one then being built at the IAS and other offspring of EDVAC, but also because he became involved in the pressing problem for the US air force in the early 1950s of how to develop missiles controlled by “automata” that would function perfectly, if only during the brief flight time of the rocket.
Von Neumann came up with two theoretical solutions to the problem of building near-infallible computing machines out of fallible, but reasonably accurate, components. The first is to set up each component in triplicate, with a means to compare automatically the outputs of the three subunits. If all three results, or any two results, agree, the computation proceeds to the next step, but if none of the subunits agrees with any other, the computation stops. This “majority voting” system works pretty well if the chance of any individual subunit making a mistake is small enough. It is even better if the number of “voters” for each step in the calculation is increased to five, seven, or even more. But this has to be done for every step of the computation (not just every “neuron”), vastly (indeed, exponentially) increasing the amount of material required. The second technique involves replacing single lines for input and output by bundles containing large numbers of lines, so-called multiplexing. The data bit (say, 1) from the bundle would only be accepted if a certain proportion of the lines agreed that it was correct. This involves complications which I will not go into here;[17] the important point is that although neither technique is practicable, von Neumann proved that it is possible to build reliable machines, even brains, from unreliable components.
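The first technique is easy to caricature in a few lines of Python. In this sketch the component is an invented “unreliable NOT gate” with a made-up one-percent error rate; note that with three binary voters a majority always exists, so the halt-on-total-disagreement case described above never arises here.

```python
import random

def unreliable_not(bit, error_rate=0.01):
    """A toy component: computes NOT, but gives the wrong answer
    with probability error_rate (an invented figure for illustration)."""
    correct = 1 - bit
    return correct if random.random() > error_rate else bit

def voted_not(bit, copies=3, error_rate=0.01):
    """Run the unreliable component in triplicate (or more) and
    accept the majority verdict."""
    votes = [unreliable_not(bit, error_rate) for _ in range(copies)]
    return max(set(votes), key=votes.count)

# Compare how often the single component and the voted version go wrong.
trials = 100_000
single_errors = sum(unreliable_not(0) != 1 for _ in range(trials))
voted_errors = sum(voted_not(0) != 1 for _ in range(trials))
print(f"single component wrong: {single_errors / trials:.3%}")
print(f"three-way majority wrong: {voted_errors / trials:.3%}")
```

With a one-percent error rate per component, the majority of three is wrong only when at least two of the three fail together, roughly three times in ten thousand; adding more voters drives the figure down further, which is exactly the trade of extra material for extra reliability described above.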
As early as 1948, von Neumann was lecturing on the problem of reproduction to a small group at Princeton.[18]
The biological aspects of the puzzle were very much in the air at the time, with several teams of researchers looking for the mechanism by which genetic material is copied and passed from one generation to the next; it would not be until 1953 that the structure of DNA was determined. And it is worth remembering that von Neumann trained as a chemical engineer, so he understood the subtleties of complex chemical interactions. So it is no surprise that von Neumann says that the copying mechanism performs “the fundamental act of reproduction, the duplication of the genetic material.” The surprise is that he says this in the context of self-reproducing automata. It was around this time that he also surmised that up to a certain level of complexity automata would only be able to produce less complicated offspring, while above this level not only would they be able to reproduce themselves, but “syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.” He made the analogy with the evolution of living organisms, pointing out that “today's organisms are phylogenetically descended from others which were vastly simpler.” How did the process begin? Strikingly, von Neumann pointed out that even if the odds are against the existence of beings like ourselves, self-reproduction only has to happen once to produce (given time and evolution) an ecosystem as complex as that on Earth. “The operations of probability somehow leave a loophole at this point, and it is by the process of self-reproduction that they are pierced.”
By the early 1950s, von Neumann was working on the practicalities of a cellular model of automata. The basic idea is that an individual component, or cell, is surrounded by other cells, and interacts with its immediate neighbors. Those interactions, following certain rules, determine whether the cell reproduces, dies, or does nothing. At first, von Neumann thought three-dimensionally. Goldstine:
[He] bought the largest box of “Tinker Toys” to be had. I recall with glee his putting together these pieces to build up his cells. He discussed this work with [Julian] Bigelow and me, and we were able to indicate to him how the model could be achieved two-dimensionally. He thereupon gave his toys to Oskar Morgenstern's little boy Karl.
The two-dimensional version of von Neumann's model of cellular automata can be as simple as a sheet of graph paper on which squares are filled in with a pencil, or rubbed out, according to the rules of the model. But it is also now widely available in different forms that run on computers, and is sometimes known as the “game of life.” With a few simple rules, groups of cells can be set up that perform various actions familiar in living organisms. Some just grow, spreading as more cells grow around the periphery; others pulsate, growing to a certain size, dying back and growing again; others move, as new cells are added on one side and other cells die on the opposite side; and some produce offspring, groups of cells that detach from the main body and set off on their own. In his discussion of such systems, von Neumann also mentioned the possibility of arbitrary changes in the functioning of a cell, equivalent to mutations in living organisms.
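For readers who want to experiment, here is a compact Python sketch of such a two-dimensional cellular automaton. It uses the simple birth-and-survival rules popularized as Conway's Game of Life, not von Neumann's original and far more elaborate self-reproducing construction; the starting pattern is the standard “glider,” one of the moving groups of cells mentioned above.

```python
from collections import Counter

def step(live_cells):
    """Advance one generation: live_cells is a set of (x, y) coordinates."""
    # Count, for every candidate cell, how many live neighbours it has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive in the next generation if it has exactly three live
    # neighbours, or has two live neighbours and is already alive.
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# The "glider": five cells that reappear one square further along
# every four generations, like the moving patterns described above.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same shape, shifted diagonally by one cell
```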
Von Neumann did not live long enough to develop these ideas fully. He died of cancer on February 8, 1957, at the age of fifty-three. But he left us with the idea of a “universal constructor,” a development of Turing's idea of a universal computer: a machine which could make copies of itself and of any other machine; that is, a self-reproducing robot. Such devices are now known as von Neumann machines, and they are relevant to one of the greatest puzzles of our, or any other, time: is there intelligent life elsewhere in the Universe? One form of a von Neumann machine would be a space-traveling robot that could move between the stars, stopping off whenever it found a planetary system to explore it and build copies of itself to speed up the exploration while sending other copies off to other stars. Starting with just one such machine, and traveling at speeds well within the speed of light limit, it would be possible to explore every planet in our home Milky Way galaxy in a few million years, an eyeblink as astronomical timescales go. The question posed by Enrico Fermi (Why, if there are alien civilizations out there, haven't they visited us?) then strikes with full force.
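The few-million-year figure is easy to check on the back of an envelope. The little calculation below uses round numbers chosen purely for illustration; the probe speed, hop length and rebuilding time are all assumptions, not measurements.

```python
# Order-of-magnitude sketch of galactic exploration by self-replicating probes.
galaxy_diameter_ly = 100_000   # rough diameter of the Milky Way, in light years
probe_speed = 0.1              # assumed probe speed as a fraction of light speed
hop_distance_ly = 10           # assumed typical distance between visited stars
build_time_years = 100         # assumed time to build fresh copies at each stop

# Each generation of probes makes one hop and then pauses to replicate.
years_per_generation = hop_distance_ly / probe_speed + build_time_years

# The expanding front of probes has to cross roughly one galactic diameter.
generations = galaxy_diameter_ly / hop_distance_ly
total_years = generations * years_per_generation

print(f"about {total_years:,.0f} years")  # around two million with these numbers
```

Even if every number here is off by a factor of a few, the answer stays in the millions of years, a tiny fraction of the Galaxy's multi-billion-year age, which is why Fermi's question bites so hard.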
There's one other way to spread intelligence across the Universe, of which von Neumann was also aware. A universal constructor would operate by having blueprints, in the form of digitally coded instructions, which we might as well call programs, telling it how to build different kinds of machines. It would be far more efficient to spread this information across the Universe in the form of a radio signal traveling at the speed of light than in a von Neumann machine pottering along more slowly between the stars. If a civilization like ours detected such a signal, it would surely be copied and analyzed on the most advanced computers available, ideal hosts for the program to come alive and take over the operation of the computer. In mentioning this possibility, George Dyson makes an analogy with the way a virus takes over a host cell; he seems not to be aware of the entertaining variation on this theme discussed back in 1961 by astrophysicist Fred Hoyle in his fictional work A for Andromeda,[19] where the interstellar signal provides the instructions for making (or growing) a human body with the mind of the machine. Hoyle, though, was well aware of the work of Turing and von Neumann.