Authors: James Essinger
Jacquard’s Web
computer, operated by electronic impulses moving along conductive surfaces, or through vacuums, at the speed of light.
In 1948 Eckert and Mauchly went on to establish a computer manufacturing firm. A year later, they introduced the Binary Automatic Computer (BINAC) to the world. This stored information on magnetic tape rather than on punched cards. Their third computer, the Universal Automatic Computer (UNIVAC 1), was even more sophisticated than the BINAC. Its reliability and sophistication won it many commercial customers. Some consider that the development of the UNIVAC started the global computer boom. Between 1948 and 1966 Eckert received 85 patents, mostly for electronic inventions.
The flow of the river produced other significant developments along the way. All of these helped to contribute to the evolution of the digital computer, with electronic components eventually being seen as so superior to mechanical ones that the idea of building computers from mechanical components was discarded altogether. Particularly significant developments in the history of computing during the extremely important years between the late 1930s and the end of the war in 1945 were as follows:
1937—the British mathematician Alan Turing published a paper entitled ‘On Computable Numbers, with an Application to the Entscheidungsproblem [Decision Problem]’. Turing was interested in investigating whether certain mathematical propositions could be shown to be definitively incapable of proof. His investigation consisted of postulating the idea of a special ‘universal mathematical machine’ that would be able to assess any proposition and make all the calculations necessary to decide whether the proposition was provable or not. His argument tended inescapably to the conclusion that mathematics will always contain some propositions that cannot be proven. His concept of a special universal mathematical machine was, however, regarded as more significant than the complex puzzle whose solution it was designed to facilitate. The machine was purely theoretical, but in essence Turing had laid the foundations for modern computer science.
The machine he proposed and specified very precisely in the paper had all the features of a modern computer: a finite program, a large data-storage capability, and a step-by-step mode of mathematical operation.
The ‘Turing Machine’, as it came to be called, is even now frequently used as a point of reference in basic discussions of automata theory. It also provided an inspiration for the next generation of digital computers that came into being in the 1940s.
1938—Konrad Zuse, a Berlin-based scientist, completed a prototype for a mechanical binary programmable calculator, that is, one which represented numbers in binary code. This enabled any number to be represented as a sequence of 0s and 1s, or as on/off switches.
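The principle behind Zuse's binary coding can be illustrated with a short sketch. This is purely an illustration of how a decimal number becomes a sequence of 0s and 1s (on/off states); it says nothing about Zuse's actual mechanical implementation.

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to its binary digit string
    by repeatedly dividing by two and collecting the remainders."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder is the next binary digit
        n //= 2
    return "".join(reversed(bits))

print(to_binary(19))  # -> 10011, i.e. 16 + 2 + 1
```

Each digit corresponds to a switch that is either on (1) or off (0), which is what made binary so natural for machines built from two-state components.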
1939—On January 1 Hewlett-Packard (HP) was founded by William Hewlett and David Packard in a garage. Unsure whether to call their creation ‘Hewlett-Packard’ or ‘Packard-Hewlett’, the two founders decided the matter by the toss of a coin. Their corporation eventually became one of IBM’s main rivals and remains one of the world’s largest designers and manufacturers of computers and other high-technology equipment. The garage is now an HP museum.
1939—John V. Atanasoff of Iowa State College (now Iowa State University) and his graduate student Clifford Berry completed a prototype 16-bit adding machine. This was able to handle a calculation whose result involves any number up to 2¹⁶ − 1, or 65 535. This was the first machine which calculated using vacuum tubes.
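The 16-bit ceiling mentioned above is simple arithmetic: with sixteen binary digits the largest value that can be represented is sixteen 1s, which is 2¹⁶ − 1. A two-line check:

```python
# With 16 binary digits, the largest representable value is 2**16 - 1.
largest = 2**16 - 1
print(largest)               # 65535
print(format(largest, "b"))  # sixteen 1s: 1111111111111111
```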
1939—The outbreak of the war spurred many improvements in technology and accelerated the impetus to make better calculation machines and devices that now started to be described more often as computers.
1941—Atanasoff and Berry completed a special-purpose computer designed to solve simultaneous linear equations. This became known as the ‘Atanasoff–Berry Computer’ (‘ABC’). It had sixty 50-bit memory units in the form of capacitors. Its secondary memory was based around punched cards, except that for speed of card production the holes were not punched into the cards but burned into them.
1943—Computers built between this year and 1959 were often regarded as ‘first generation’. They were generally based on electronic valves used in conjunction with electric circuits, and with punched cards playing a key role in allowing the devices to be programmed and in facilitating memory storage.
1943—Encryption experts, including the British mathematician Alan Turing, based at the secret Government Code and Cypher School (‘Station X’) at Bletchley Park, England, completed a device they affectionately named the ‘Heath Robinson’ after the British cartoonist famous for his drawings of ludicrously complex mechanisms for carrying out simple tasks. This was a special-purpose computer designed solely to break codes. It was essentially a logic-based processing device and worked using a combination of electronics and electromechanical relays. Apart from its importance in facilitating the cracking of enemy codes, it was also of importance as a forerunner of the ‘Colossus’ computer.
1943—December. The earliest truly programmable electronic computer was first demonstrated in Britain. It contained 2400 vacuum tubes and was christened the ‘Colossus’. It was built by Dr Thomas Flowers of the Post Office Research Laboratories in London to crack the German ‘Lorenz’ (SZ42) teleprinter cypher, a system distinct from that of the better-known ‘Enigma’ machines. Colossus, deployed at Bletchley Park, was the immediate successor to the Heath Robinson. It was able to decipher 5000 characters per second. While not a general-purpose computer, the Colossus represented an enormously important advance in computing science. Ten Colossus machines were eventually built, but by the end of the war they had been destroyed by a short-sighted British Government which was afraid that the sophisticated technology they embodied might find its way into Soviet hands.
The introduction of electronic computers boosted processing speeds beyond the wildest dreams of even the most visionary of computer pioneers. ENIAC was able to perform 5000 operations per second compared to the three per second of the Harvard Mark 1. Before long it was routine for computers to perform tens of thousands of operations per second.
The global computer revolution was already well advanced when it received another huge boost: the widespread introduction of the transistor. The transistor was actually invented in 1947, but more than ten years of development work were needed to make it a viable alternative to the vacuum tube. When the transistor became commercially available in 1959, it triggered another vast step forward for computer technology. The transistor made use of the properties of special materials, known as semiconductors, to create electronic switches that did everything a vacuum tube (valve) could do, but which used extremely small components that had the advantage of being solid and not requiring the creation of a vacuum. The transistor’s much greater efficiency and reliability, far lower power consumption, and much smaller size rendered valves largely obsolete, though they still have some uses in certain specialised electronic equipment, such as some TV cameras and oscilloscopes. Also, some hi-fi connoisseurs prefer the sound quality of valve amplifiers to that of transistorised ones.
By using transistors and by taking advantage of important innovations in how memory capacity was built into computer hardware, computer manufacturers were able to produce more efficient, smaller, and faster digital systems. Some of these machines could process up to 100 000 instructions per second.
Yet even this speed appears snail-like compared with the computer processors of today. These processors, known formally as microprocessors or informally as ‘microchips’ or even ‘chips’, are basically fantastically miniaturized assemblies of tiny transistors. The original Intel Pentium chip, for example, contained around three million transistors, and successive generations of processors have since passed the one-billion mark. Such chips are far too small to be built manually; they are in fact constructed in completely clean, dust-free environments, with the chip itself being etched out of silicon (hence the name ‘silicon chip’) by a light-based process known as photolithography. These chips allow computers to operate at prodigious speeds that show no sign of flattening out.
Meanwhile, punched cards became the indispensable programming medium for almost every computer in the world.
Cards were cheap and convenient, and as long as they worked there was no reason to look for another solution. The cards of the 1940s and 1950s were thinner than before. As computers became more sensitive, cards were manufactured and punched with phenomenal accuracy. However, they were still recognizably the direct descendants of Jacquard’s cardboard cards for ‘programming’ a loom.
As processing speeds became faster and faster, experts feared that it would soon not be possible to use punched cards for programming electronic computers. Processing speeds were becoming so fast that even the fastest punched-card feed system, in which punched cards raced far too fast for the eye to see, could never have loaded the cards into the computer’s memory rapidly enough to keep pace. Since there was no alternative way to program the electronic computer, the problem actually delayed the evolution of new technology in the late 1940s. During this time, programs had to be ‘loaded’ into computers by an operator who physically made changes in the wiring of the machines: a tedious, slow, and laborious task that significantly reduced the advantages offered by the new technology.
Fortunately for the computing industry, the problem was solved by the development of a program-reading technique known as stored programming. This allowed the computer’s memory to hold both the data and the program: that is, the raw information and the instructions for processing it. With stored programming, it did not matter that a punched-card system could never keep pace with the processing speed of an electronic computer. The program could be loaded in the form of punched cards, and stored throughout all the processing that followed.
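The stored-program idea can be made concrete with a toy sketch. The machine, instruction set, and memory layout below are entirely invented for illustration; the point is only that instructions and data live side by side in a single memory, so once the ‘cards’ have been read in, the processor never needs to consult them again.

```python
# Toy stored-program machine (illustrative only): one memory holds
# both the program (cells 0-3) and the data it operates on (cells 6-8).
memory = [
    ("LOAD", 6),     # 0: copy contents of cell 6 into the accumulator
    ("ADD", 7),      # 1: add contents of cell 7 to the accumulator
    ("STORE", 8),    # 2: write the accumulator into cell 8
    ("HALT", None),  # 3: stop
    None,            # 4: (unused)
    None,            # 5: (unused)
    40,              # 6: data
    2,               # 7: data
    None,            # 8: result goes here
]

acc, pc = 0, 0  # accumulator and program counter
while True:
    op, addr = memory[pc]  # fetch the next instruction from memory itself
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[8])  # -> 42
```

Because the program is fetched from the same fast memory as the data, the slow card reader only matters once, at load time, which is exactly why stored programming dissolved the punched-card speed problem described above.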
Today, stored programming is something we take for granted.
All the programs and applications needed can be kept ‘in’ the computer on a permanent basis. It is now difficult to imagine a time when a computer had to read a pile of punched cards every time it was used.
The development of stored programming, and the tens of thousands of other computing breakthroughs that have made the computer what it is in our world today, were carried out by teams of computer engineers working for large computing corporations. Ever since the late 1950s, this has tended to be the pattern for breakthroughs in computing: they have been the result of collaborative effort by large teams composed of often anonymous people rather than the work of individual pioneers.
This is the river approaching the ocean.
In the 1960s, a new generation of punched cards was born.
These cards featured small, usually rectangular perforations. They were read electronically when the perforations either admitted or blocked impulses of light that triggered light-sensitive cells.
The figure overleaf shows what a typical punched card from this period looked like. The cards generally continued to incorporate Hollerith’s ‘missing corner’ feature, guarding against them being inserted the wrong way round. Typically, programs would consist of many hundreds or even thousands of punched cards, each one containing one line of the complete program. Many computer users now in their fifties and sixties have nostalgic memories of loading punched cards into computers each time the program was run.
The principle was essentially the same as for the Jacquard loom, where one punched card was needed for each pick, that is, each pass of the weft thread. Punched cards continued to be the main medium for loading programs into computers and for inputting data until the mid-1970s, when they were gradually replaced by magnetic tape and magnetic (or ‘floppy’) disks. Yet it was only in the mid-1980s that punched cards started to become obsolete in the computer industry. IBM manufactured the last one in 1984, a date that is surprisingly recent, considering how powerful and advanced computers had become by then.