Our Final Invention: Artificial Intelligence and the End of the Human Era

It was because of his chess playing that a year after World War II began, Britain’s reigning chess champion, Hugh Alexander, recruited Good to join Hut 8 at Bletchley Park. Hut 8 was where the decoders worked. They broke codes used by all the Axis powers—Germany, Japan, and Italy—to communicate military commands, but with special emphasis on Germany. German U-boats were sinking Allied shipping at a crippling rate—in just the first half of 1942, U-boats would sink some five hundred Allied ships. Prime Minister Winston Churchill feared his island nation would be starved into defeat.

German messages were sent by radio waves, and the English intercepted them with listening towers. From the start of the war Germany created the messages with a machine called the Enigma. Widely distributed within the German armed forces, the Enigma was about the size and shape of an old-fashioned manual typewriter. Each key displayed a letter and was connected to a wire. That wire would make contact with another wire connected to a different letter, and that second letter would be substituted for the one on the key. All the wires were mounted on rotors, so that any letter of the alphabet could be substituted for any other. The basic Enigmas had three wheels, each performing substitutions on the substitutions made by the wheel before it. For an alphabet of twenty-six letters, 403,291,461,126,605,635,584,000,000 (twenty-six factorial) such substitution alphabets were possible. The wheel settings changed almost daily.

When one German sent others an Enigma-encoded message, the recipients would use their own Enigmas to decode it, provided they knew the sender’s settings.
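The mechanics are easier to see in miniature. Below is a minimal Python sketch of the rotor idea under stated assumptions: three made-up rotor wirings and arbitrary starting offsets stand in for the real wheels and daily settings, and the plugboard and reflector are omitted entirely, so this illustrates substitution upon substitution rather than a working Enigma.

```python
import string
from math import factorial

ALPHABET = string.ascii_uppercase

def make_rotor(step):
    # An arbitrary permutation of the alphabet (step must be coprime with 26).
    return "".join(ALPHABET[(i * step + 3) % 26] for i in range(26))

# Three toy rotors -- stand-ins for the machine's wheels, not the real wirings.
ROTORS = [make_rotor(s) for s in (5, 7, 11)]

def encode(message, offsets):
    """Pass each letter through three substitution rotors in turn.

    `offsets` play the role of the day's settings. The first rotor steps
    one position per keypress, so the substitution changes with every letter.
    (No plugboard or reflector here -- only the rotor idea.)
    """
    offsets = list(offsets)
    out = []
    for ch in message.upper():
        if ch not in ALPHABET:
            continue
        offsets[0] = (offsets[0] + 1) % 26          # step the fast rotor
        idx = ALPHABET.index(ch)
        for rotor, off in zip(ROTORS, offsets):     # substitution of a substitution
            idx = ALPHABET.index(rotor[(idx + off) % 26])
        out.append(ALPHABET[idx])
    return "".join(out)

if __name__ == "__main__":
    print(encode("ATTACK AT DAWN", offsets=(3, 7, 21)))
    # The number quoted above is 26 factorial, the count of possible
    # substitution alphabets for a 26-letter alphabet:
    print(f"{factorial(26):,}")   # 403,291,461,126,605,635,584,000,000
```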

Fortunately Bletchley Park had a secret weapon of its own—Alan Turing. Before the war, Turing had studied mathematics and encryption at Cambridge and Princeton. He had imagined an “automatic machine,” now known as a Turing machine. The automatic machine laid out the basic principles of computation itself.

The Church-Turing hypothesis, which combined Turing’s work with that of his Princeton professor, mathematician Alonzo Church, really puts the starch in the pants of the study of artificial intelligence. It proposes that anything that can be computed by an algorithm, or program, can be computed by a Turing machine. Therefore, if brain processes can be expressed as a series of instructions—an algorithm—then a computer can process information the same way. In other words, unless there’s something mystical or magical about human thinking, intelligence can be achieved by a computer. A lot of AGI researchers have pinned their hopes to the Church-Turing hypothesis.

The war gave Turing a crash course in everything he’d been thinking about before the war, and lots he hadn’t been thinking about, like Nazis and submarines. At the war’s peak, Bletchley Park personnel decoded some four thousand intercepted messages per day. Cracking them all by hand became impossible. It was a job meant for a machine. And it was Turing’s critical insight that it was easier to calculate what the settings on the Enigma were not, rather than what they were.

The decoders had data to work with—intercepted messages that had been “broken” by hand, or by electromechanical decoding machines called Bombes. They called these messages “kisses.” Like I. J. Good, Turing was a devoted Bayesian, at a time when the statistical method was seen as a kind of witchcraft. The heart of the method, Bayes’ theorem, describes how to use data to infer the probabilities of unknown events, in this case, the Enigma’s settings. The “kisses” were the data that allowed the decoders to determine which settings were highly improbable, so that the code-breaking efforts could be focused more efficiently. Of course, the codes changed almost daily, so work at Bletchley Park was a constant race.
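In miniature, the Bayesian bookkeeping looks like this. The sketch below assumes a made-up set of three candidate settings and invented likelihoods, nothing like the scale of the real problem or Turing’s actual scoring methods; it only shows how evidence shifts probability away from improbable settings so they can be discarded.

```python
# Toy Bayesian update over candidate settings -- a sketch, not Bletchley's procedure.
# Evidence (a "kiss": a known plaintext/ciphertext pairing) updates each setting's
# probability via Bayes' theorem:
#   P(setting | evidence) = P(evidence | setting) * P(setting) / P(evidence)

candidates = {
    "setting_A": 1 / 3,   # uniform priors over three made-up candidates
    "setting_B": 1 / 3,
    "setting_C": 1 / 3,
}

# Hypothetical likelihoods: how probable the intercepted fragment would be
# if the machine really were in each setting.
likelihood = {"setting_A": 0.02, "setting_B": 0.30, "setting_C": 0.001}

evidence = sum(likelihood[s] * p for s, p in candidates.items())
posterior = {s: likelihood[s] * p / evidence for s, p in candidates.items()}

for setting, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{setting}: {prob:.3f}")
# Settings with tiny posteriors can be ruled out, focusing the search.
```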

Turing and his colleagues designed a series of electronic machines that would evaluate and eliminate possible cipher settings. These early computers culminated in a series of machines all named “Colossus.” Colossus could read five thousand characters per second from paper tape that traveled through it at twenty-seven miles an hour. It contained 1,500 vacuum tubes, and filled a room. One of its main users, and creator of half the theory behind the Colossus, was Turing’s chief statistician for much of the war: Irving John Good.

The heroes of Bletchley Park probably shortened World War II by between two and four years, saving an incalculable number of lives. But there were no parades for the secret warriors. Churchill ordered that all Bletchley’s encryption machines be broken into pieces no bigger than a fist, so their awesome decoding power couldn’t be turned against Great Britain. The code breakers were sworn to secrecy for thirty years. Turing and Good were recruited to join the staff at the University of Manchester, where their former section head, Max Newman, intended to develop a general purpose computer. Turing was working on a computer design at the National Physical Laboratory when his life turned upside down. A man with whom he’d had a casual affair burgled his house. When he reported the crime he admitted the sexual relationship to the police. He was charged with gross indecency and stripped of his security clearance.

At Bletchley Turing and Good had discussed futuristic ideas like computers, intelligent machines, and an “automatic” chess player. Turing and Good bonded over games of chess, which Good won. In return, Turing taught him Go, an Asian strategy game, which Good also won. A world-class long-distance runner, Turing devised a form of chess that leveled the playing field against better players. After every move each player had to run around the garden. He got two moves if he made it back to the table before his opponent had moved.

Turing’s 1952 conviction for indecency surprised Good, who didn’t know Turing was homosexual. Turing was forced to choose between prison and chemical castration. He opted for the latter, submitting to regular shots of estrogen. In 1954 he died after eating an apple laced with cyanide. A baseless but intriguing rumor claims Apple Computer derived its logo from this tragedy.

After the secrecy ban ran out, Good was one of the first to speak out against the government’s treatment of his friend, a war hero.

“I won’t say that what Turing did made us win the war,” Good said. “But I daresay we might have lost it without him.” In 1967 Good left a position at Oxford University to accept the job at Virginia Tech in Blacksburg, Virginia. He was fifty-two. For the rest of his life he’d return to Great Britain just once more.

He was accompanied on that 1983 trip by a tall, beautiful twenty-five-year-old assistant, a blond Tennessean named Leslie Pendleton. Good met Pendleton in 1980 after he’d gone through ten secretaries in thirteen years. A Tech graduate herself, Pendleton stuck where others had not, unbowed by Good’s grating perfectionism. The first time she mailed one of his papers to a mathematics journal, she told me, “He supervised how I put the paper and cover letter into the envelope. He supervised how I sealed the envelope—he didn’t like spit and made me use a sponge. He watched me put on the stamp. He was right there when I got back from the mail room to make sure mailing it had gone okay, like I could’ve been kidnapped or something. He was a bizarre little man.”

Good wanted to marry Pendleton, but for starters she could not see beyond their forty-year age difference. Yet the English oddball and the Tennessee beauty forged a bond she still finds hard to describe. For thirty years she accompanied him on vacations, looked after all his paperwork and subscriptions, and guided his affairs into his retirement and through his declining health. When we met, she took me to visit his house in Blacksburg, a brick rambler overlooking U.S. Route 460, which had been a two-lane country road when Good moved in.

Leslie Pendleton is statuesque, now in her mid-fifties, a Ph.D. and mother of two adults. She’s a Virginia Tech professor and administrator, a master of schedules, classrooms, and professors’ quirks, for which she had good training. And even though she married a man her own age, and raised a family, many in the community questioned her relationship with Good. They finally got their answer in 2009 at his funeral, where Pendleton delivered the eulogy. No, they had never been romantically involved, she said, but yes, they had been devoted to each other. Good hadn’t found romance with Pendleton, but he had found a best friend of thirty years, and a stalwart guardian of his estate and memory.

In Good’s yard, accompanied by the insect whine of Route 460, I asked Pendleton if the code breaker ever discussed the intelligence explosion, and if a computer could save the world again, as it had done in his youth. She thought for a moment, trying to retrieve a distant memory. Then she said, surprisingly, that Good had changed his mind about the intelligence explosion. She’d have to look through his papers before she could tell me more.

That evening, at an Outback Steakhouse where Good and his friend Golde Holtzman had maintained a standing Saturday night date, Holtzman told me that three things stirred Good’s feelings—World War II, the Holocaust, and Turing’s shameful fate. This played into the link in my mind between Good’s war work and what he wrote in his paper, “Speculations Concerning the First Ultraintelligent Machine.” Good and his colleagues had confronted a mortal threat, and were helped in defeating it by computational machines. If a machine could save the world in the 1940s, perhaps a superintelligent one could solve mankind’s problems in the 1960s. And if the machine could learn, its intelligence would explode. Mankind would have to adjust to sharing the planet with superintelligent machines. In “Speculations” he wrote:

The machines will create social problems, but they might also be able to solve them in addition to those that have been created by microbes and men. Such machines will be feared and respected, and perhaps even loved. These remarks might appear fanciful to some readers, but to the writer they seem very real and urgent, and worthy of emphasis outside of science fiction.

There is no straight conceptual line connecting Bletchley Park and the intelligence explosion, but a winding one with many influences. In a 1996 interview with statistician and former pupil David L. Banks, Good revealed that he was moved to write his essay after delving into artificial neural networks. Called ANNs, they are a computational model that mimics the activity of the human brain’s networks of neurons. Upon stimulation, neurons in the brain fire, sending on a signal to other neurons. That signal can encode a memory or lead to an action, or both. Good had read a 1949 book by psychologist Donald Hebb that proposed that the behavior of neurons could be mathematically simulated.

A computational “neuron” would be connected to other computational neurons. Each connection would have a numeric “weight” corresponding to its strength. Machine learning would occur when two neurons were simultaneously activated, increasing the “weight” of their connection. “Cells that fire together, wire together” became the slogan for Hebb’s theory. In 1957, Cornell psychologist Frank Rosenblatt created a neuronal network based on Hebb’s work, which he called a “Perceptron.” Built on a room-sized IBM computer, the Perceptron “saw” and learned simple visual patterns. In 1960 IBM asked I. J. Good to evaluate the Perceptron. “I thought neural networks, with their ultraparallel working, were as likely as programming to lead to an intelligent machine,” Good said. The first talks on which Good based “Speculations Concerning the First Ultraintelligent Machine” came two years later. The intelligence explosion was born.
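A minimal Python sketch of the two ideas in this paragraph, with made-up data: a Hebbian-style weight update, and a tiny perceptron trained with the error-correction rule on a trivial pattern (the logical OR). It illustrates the principle only, not the room-sized Mark I.

```python
# --- Hebbian-style update: strengthen a connection when both units are active.
def hebbian_update(weight, pre_active, post_active, rate=0.1):
    if pre_active and post_active:
        weight += rate                    # "cells that fire together, wire together"
    return weight

# --- A tiny perceptron learning the OR pattern (illustrative toy data).
def train_perceptron(samples, epochs=20, rate=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            output = 1 if (w[0] * x[0] + w[1] * x[1] + bias) > 0 else 0
            error = target - output
            w[0] += rate * error * x[0]   # error-correction learning rule
            w[1] += rate * error * x[1]
            bias += rate * error
    return w, bias

if __name__ == "__main__":
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    weights, bias = train_perceptron(data)
    print("learned weights:", weights, "bias:", bias)
    print("hebbian weight after co-activation:", hebbian_update(0.5, True, True))
```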

Good was more right than he knew about ANNs. Today, artificial neural networks are an artificial intelligence heavyweight, involved in applications ranging from speech and handwriting recognition to financial modeling, credit approval, and robot control. ANNs excel at the high-level, fast pattern recognition these jobs require. Most also involve “training” the neural network on massive amounts of data (called training sets) so that the network can “learn” patterns. Later it can recognize similar patterns in new data. Analysts can ask, based on last month’s data, what the stock market will look like next week. Or, how likely is someone to default on a mortgage, given a three-year history of income, expenses, and credit data?
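As a toy version of that train-then-predict workflow, the sketch below fits a single artificial neuron to a handful of invented (income, expenses, credit score) rows and then scores a new applicant. Real systems train far larger networks on far larger training sets; the point is only the split between learning from past data and recognizing patterns in new data.

```python
# Train a single sigmoid neuron on made-up applicant rows, then query it
# about a new applicant -- a toy illustration of the train/predict split.
import math

training_set = [
    # (income, expenses, credit score) scaled to 0..1; label 1 = defaulted
    ((0.9, 0.2, 0.8), 0),
    ((0.3, 0.7, 0.4), 1),
    ((0.5, 0.5, 0.6), 0),
    ((0.2, 0.8, 0.3), 1),
]

weights, bias, rate = [0.0, 0.0, 0.0], 0.0, 0.5

def predict(x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))            # estimated probability of default

for _ in range(2000):                         # simple gradient-descent training loop
    for x, label in training_set:
        p = predict(x)
        for i in range(3):
            weights[i] -= rate * (p - label) * x[i]
        bias -= rate * (p - label)

print(f"default risk for new applicant: {predict((0.4, 0.6, 0.5)):.2f}")
```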

Like genetic algorithms, ANNs are “black box” systems. That is, the inputs are transparent, and the network weights and neuron activations can all be read off as numbers. What they output is understandable. But what happens in between? Nobody understands. The output of “black box” artificial intelligence tools can’t ever be predicted. So they can never be truly and verifiably “safe.”

*   *   *

But they’ll likely play a big role in AGI systems. Many researchers today believe pattern recognition—what Rosenblatt’s Perceptron aimed for—is our brain’s chief tool for intelligence. The inventor of the Palm Pilot and Handspring Treo, Jeff Hawkins, pioneered handwriting recognition with ANNs. His company, Numenta, aims to crack AGI with pattern recognition technology. Dileep George, once Numenta’s Chief Technology Officer, now heads up Vicarious Systems, whose corporate ambition is stated in its slogan: We’re Building Software that Thinks and Learns Like a Human.

Neuroscientist, cognitive scientist, and biomedical engineer Stephen Grossberg has come up with a model based on ANNs that some in the field believe could really lead to AGI, and perhaps the “ultraintelligence” whose potential Good saw in neural networks. Broadly speaking, Grossberg first determines the roles played in cognition by different regions of the cerebral cortex. That’s where information is processed, and thought produced. Then he creates ANNs to model each region. He’s had success in motion and speech processing, shape detection, and other complex tasks. Now he’s exploring how to computationally link his modules.

Machine learning might have been a new concept to Good, but he would have encountered machine-learning algorithms in evaluating the Perceptron for IBM. Then, the tantalizing possibility of machines learning as humans do suggested to Good consequences others had not yet imagined. If a machine could make itself smarter, then the improved machine would be even better at making itself smarter, and so on.
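The shape of that argument can be put in a few lines. In the toy loop below the numbers are arbitrary; it only illustrates the compounding Good had in mind, in which each round of improvement makes the next round larger.

```python
# Toy illustration of the feedback loop Good described: each generation's
# "intelligence" score determines how much it can improve the next one.
# The numbers are arbitrary; this models the shape of the argument, not a real system.
intelligence = 1.0
for generation in range(1, 11):
    improvement = 0.1 * intelligence      # a smarter machine improves itself more
    intelligence += improvement
    print(f"generation {generation:2d}: intelligence = {intelligence:.2f}")
# The gains compound: every step is larger than the last.
```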
