
This is a terrifying scenario. But if the 2010s–50s do rerun the script of the 1870s–1910s, with the globocop weakening, unknown unknowns multiplying, and weapons growing ever more destructive, it will become increasingly plausible.

The New England saying, then, may be true: perhaps we really can't get there from here.

Unless, that is, “there” isn't where we think it is.

Come Together

The secret of strategy is knowing where you want to go, because only then can you work out how to get there. For more than two hundred years, campaigners for peace have been imagining “there”—a world without war—in much the way that Kant did, as something that can be brought into being by a conscious decision to renounce violence. Margaret Mead insisted that war is something we have invented, and therefore something we can uninvent. The authors of “War” suggested that standing up and shouting that war is good for absolutely nothing would end it. Political scientists tend to be less idealistic, but many of them also argue that conscious choice (this time, to build better, more democratic, and more inclusive institutions) will get us there from here.

The long-term history I have traced in this book, however, points in a very different direction. We kill because the grim logic of the game of death rewards it. On the whole, the choices we make do not change the game's payoffs; rather, the game's payoffs change the choices we make. That is why we cannot just decide to end war.

But long-term history also suggests a second, and more upbeat, conclusion. We are not trapped in a Red Queen Effect, doomed to rerun the self-defeating tragedy of globocops that create their own enemies until we destroy civilization altogether. Far from keeping us in the same place, all the running we have done in the last ten thousand years has transformed our societies, changing the payoffs in the game; and in the next few decades the payoffs look likely to change so much that the game of death will turn into something entirely new. We are beginning to play the endgame of death.

To explain what I mean by this rather cryptic statement, I want to step back from the horrors of war for a moment to take up some of the arguments in my two most recent books, Why the West Rules—for Now and The Measure of Civilization. As I mentioned at the end of Chapter 2, in these publications I presented what I called an index of social development, which measures how successful different societies have been at getting what they wanted from the world across the fifteen thousand years since the last ice age. The index assigned social development scores on a scale from 0 points to 1,000, the latter being the highest score possible under the conditions prevailing in the year A.D. 2000, where the index ended.

Armed with this index, I asked—partly tongue in cheek and partly not—what would happen if we projected the scores forward. As with any prediction, the results depend on what assumptions we make, so I took a deliberately conservative starting point, asking how the future will shape up if development continues increasing in the twenty-first century just at the pace it did in the twentieth. The result, even with such a restrictive assumption, was startling: by 2100, the development score will have leaped to 5,000 points. Getting from a caveman painting bison at Lascaux to you reading this book required development to rise by 900 points; getting to 2100 will see it increase by another 4,000 points.
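To make the arithmetic behind that projection concrete, here is a minimal back-of-the-envelope sketch in Python. It simply assumes that the twenty-first century repeats the twentieth century's growth factor; the 1900 score of roughly 170 points is an illustrative assumption (the text above gives only the roughly 900-point rise to 2000 and the 5,000-point figure for 2100), so treat the output as a ballpark check, not the index's actual method.

    # Back-of-the-envelope check on the projection described above.
    # The 1900 score is an illustrative assumption, not a figure from the text.
    score_1900 = 170.0   # assumed score in 1900 (illustrative only)
    score_2000 = 900.0   # roughly the rise from Lascaux to A.D. 2000

    # If the twenty-first century repeats the twentieth century's growth factor...
    growth_factor = score_2000 / score_1900
    score_2100 = score_2000 * growth_factor

    print(f"Twentieth-century growth factor: about {growth_factor:.1f}x")
    print(f"Projected score for 2100: about {score_2100:,.0f} points")
    # -> roughly 4,800 points, in the same ballpark as the 5,000 cited above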

“Mind-boggling” is the only word for such a prediction—literally, because one of the major implications of such soaring development is that the human mind itself will be transformed during the century to come. Computerization is not just changing war: it is changing everything, including the animals that we are. Biological evolution gave us brains so powerful that we could invent cultural evolution, but cultural evolution has now reached the point that the machines we are building are beginning to feed back into our biological evolution—with results that will change the game of death into an endgame of death, with the potential to make violence irrelevant.

It is hard to imagine anything that could be more important for the future of war, but in conversations over the last year or two I have noticed a deep disconnect between how technologists and security analysts see the world. Among technologists, there seems to be no such thing as over-optimism; everything is possible, and it will all turn out better than we expect. In the world of international security, however, the bad is always about to get worse, and things are always scarier than we had realized. Security analysts tend to dismiss technologists as dreamers, so lost in utopian fantasies that they cannot see that strategic realities will always override technobabble, and technologists often deride the security crowd as dinosaurs, so stuck in the old paradigm that they cannot see that computerization will sweep their worries away.

There are exceptions, of course. The National Intelligence Council's reports try to bring both points of view together, as does the recent book The New Digital Age, co-authored by the technologist Eric Schmidt and the security expert Jared Cohen. Trying to build on their examples—schizophrenic as the experience can be—I devote the rest of this section to the technologists' projections, turning to the reality check of security concerns in the section that follows. The combination produces a vision of the near future that is both uplifting and alarming.

The technologists' starting point is an obvious fact: computers powerful enough to fly fighter jets in real time will be powerful enough to do a lot more too. Just how much more, no one can say for sure, but hundreds of futurists have made their best guesses anyway. Not surprisingly, no two agree on very much, and if there is anything we can be certain of, it is that these visions are at least as full of errors as the century-old science fiction of Jules Verne and H. G. Wells. But by the same token, when taken in bulk rather than tested one speculation at a time, today's futurists also resemble those of late-Victorian times in recognizing a set of broad trends transforming the world—and when it came to broad trends, Verne and Wells were arguably right more often than they were wrong.

The biggest area of agreement among contemporary futurists (and the mainstay of the Matrix movies) is that we are merging with our machines. This is an easy prediction to make, given that we have been doing it since the first cardiac pacemaker was fitted in 1958 (or, in a milder sense, since the first false teeth and wooden legs). The twenty-first-century version, however, is much grander. Not only are we merging with our machines; through our machines, we are also merging with each other.

The idea behind this argument is very simple. Inside your brain, that 2.7 pounds of magic that I said so much about in Chapter 6, 10,000 trillion electrical signals flash back and forth every second between some twenty-two billion neurons. These signals make you who you are, with your unique way of thinking and the roughly ten trillion stored pieces of information that constitute your memory. No machine yet comes close to matching this miracle of nature—although the machines are gaining fast.

For half a century, the power, speed, and cost-effectiveness of computers have been doubling every year or so. In 1965, a dollar's worth of computing on a new, superefficient IBM 1130 bought one one-thousandth of a calculation per second. By 2010, the same dollar bought more than ten billion calculations per second, and by the time this book appears in 2014, the relentless doubling will have boosted that above a hundred billion. Cheap laptops can do more calculations, and faster, than the giant mainframes of fifty years ago. We can even make computers just a few molecules across, so small that they can be inserted into our veins to reprogram cells to fight cancer. Just a century ago, it would all have seemed like sorcery.
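The arithmetic behind these figures is simple compounding. A minimal Python sketch, assuming a steady one doubling per year from the 1965 baseline, reproduces the numbers in the paragraph above; the function name is mine, and the one-doubling-per-year rate is a simplification of "every year or so."

    # Calculations per second that one dollar buys, assuming one doubling per
    # year from the 1965 baseline of one one-thousandth of a calculation.
    def calcs_per_second_per_dollar(year: int) -> float:
        return 0.001 * 2 ** (year - 1965)

    for year in (1965, 2010, 2014):
        print(year, f"{calcs_per_second_per_dollar(year):.3g}")
    # 1965 -> 0.001    (one one-thousandth of a calculation)
    # 2010 -> 3.5e+10  (more than ten billion)
    # 2014 -> 5.6e+11  (above a hundred billion)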

We only need to extend the trend line out as far as 2029, observes Ray Kurzweil (the best known of the technological futurists, and now director of engineering at Google too), to get scanners powerful enough to map brains neuron by neuron and computers powerful enough to run the programs in real time. At that point, Kurzweil claims, there will effectively be two of you: one the old, unimproved, biological version, decaying over time, and the other a new, unchanging, machine-based alternative. Better still, says Kurzweil, the machine-based minds will be able to share information as easily as we now swap files between computers, and by 2045, if the trends hold, there will be supercomputers powerful enough to host scans of all eight billion minds in the world. Carbon- and silicon-based intelligence will come together in a single global consciousness, with thinking power dwarfing anything the world has ever seen. Kurzweil calls this moment the Singularity—“a future period during which the pace of technological change will be so rapid, its impact so deep … that technology appears to be expanding at infinite speed.”

These are extraordinary claims. Naturally, there are plenty of naysayers, including some leading scientists as well as rival futurists. They are often blunt; the Singularity is just “the Rapture for Nerds,” says the science-fiction author Ken MacLeod, while the influential technology critic Evgeny Morozov thinks that all this “digito-futuristic nonsense” is nothing more than a “Cyber-Whig theory of history.” (I am not entirely sure what that means, but it is clearly not a compliment.) One neuroscientist, speaking at a conference in 2012, was even more direct. “It's crap,” he said.

Other critics, however, prefer to follow the lead of the famous physicist Niels Bohr, who once told a colleague, “We are all agreed that your theory is crazy. The question that divides us is whether it is crazy enough to have a chance of being correct.” Perhaps, some think, Kurzweil is not being crazy enough. A 2012 survey of crystal-ball gazers found that the median date at which they anticipated a technological Singularity was 2040, five years ahead of Kurzweil's projection; while Henry Markram, the neuroscientist who directs the Human Brain Project, even expects to get there (with the aid of a billion-euro grant from the European Union) by 2020.

But when we turn from soothsaying to what is actually happening in laboratories, we discover—perhaps unsurprisingly—that while no one can predict the detailed results, the broad trend does keep moving toward the computerization of everything. I touched on some of this science in my book Why the West Rules—for Now, so here I can be brief, but I do want to note a couple of remarkable advances in what neuroscientists call brain-to-brain interfacing (in plain English, telepathy over the Internet) made since that book appeared in 2010.

The first requirement for merging minds through machines is machines that can read the electrical signals inside our skulls, and in 2011 neuroscientists at the University of California, Berkeley, took a big step in this direction. After measuring the blood flow through volunteers' visual cortices as they watched film clips, they used computer algorithms to convert the data back into images. The results were crude, grainy, and rather confusing, but Jack Gallant, the neuroscientist leading the project, is surely right to say, “We are opening a window into the movies in our minds.”

Just a few months later, another Berkeley team recorded the electrical activity in subjects' brains as they listened to human speech, and then had computers translate these signals back into words. Both experiments were clumsy; the first required volunteers to lie still for hours, strapped into functional magnetic resonance imaging scanners, while the second could only be done on patients undergoing brain surgery, who had had big slices of their skulls removed and electrodes placed directly inside. “There's a long way to go before you get to proper mind-reading,” Jan Schnupp, a professor of neuroscience at Oxford University, concluded in his assessment of the research, but, he added, “it's a question of when rather than if … It is conceivable that in the next ten years this could happen.”
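The published details of these decoding experiments go well beyond this book, but the general idea, learning a statistical mapping from recorded brain responses back to the features of the stimulus, can be sketched in a few lines. The example below is an illustration only, using ridge regression on synthetic data; the array sizes, noise level, and the decoder itself are my assumptions, not the Berkeley teams' actual methods.

    # Illustrative sketch only: learning a linear "decoder" from synthetic data,
    # standing in for the idea of mapping brain responses back to stimulus
    # features. This is NOT the Berkeley teams' actual pipeline.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_trials, n_voxels, n_features = 200, 500, 20            # invented sizes

    stimulus = rng.normal(size=(n_trials, n_features))       # e.g. image features
    encoding = rng.normal(size=(n_features, n_voxels))       # unknown brain "code"
    responses = stimulus @ encoding + 0.5 * rng.normal(size=(n_trials, n_voxels))

    # Fit the decoder on the first 150 trials, test it on the remaining 50.
    decoder = Ridge(alpha=10.0).fit(responses[:150], stimulus[:150])
    predicted = decoder.predict(responses[150:])

    corr = np.corrcoef(predicted.ravel(), stimulus[150:].ravel())[0, 1]
    print(f"Correlation between decoded and true features: {corr:.2f}")

Real decoding works on far richer feature spaces and far noisier recordings, which is why the reconstructed movies remain crude and grainy, but the logic is the same: the better the learned mapping, the closer the decoded output gets to what the subject actually saw or heard.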

The second requirement for Internet-enabled telepathy is a way to transmit electrical signals from one brain to another, and in 2012 Miguel Nicolelis, a neuroscientist at Duke University, showed how this might be done, by getting rats in his native Brazil to control the bodies of rats in North Carolina. The South American rodents had been taught that when a light flashed, they would get snacks if they pressed a lever. Electrodes attached to their heads picked up this brain activity and sent it over the Internet to electrodes on the skulls of North American rodents—who, without the benefit of training or flashing lights, pressed the same lever and got a snack 70 percent of the time.

Seventy percent is far from perfect, rats' brains are much simpler than ours, and pressing a lever is not a very challenging task. But despite the myriad technical problems, one thing seems certain. Brain-to-brain interfacing is not going to stop at rats moving one another's paws over the Internet. It may well develop in ways entirely different from Kurzweil's vision—which Nicolelis calls “a bunch of hot air”—but it will continue to develop nonetheless. (Nicolelis, in fact, expects us to get to much the same place as does Kurzweil, but from the opposite direction: instead of uploading brain scans onto computers, he says, we will implant tiny computers into our brains.)

Since the experts cannot agree on the details, there is little to gain from arbitrarily picking one prophecy and running with it. However, there is even less to gain from pretending that nothing is happening at all. We might do best to heed the sage words of Richard Smalley, a Nobel Prize–winning chemist who is often called the father of nanotechnology. Smalley's Law (as I like to call it) tells us that “when a scientist says something is possible, they're probably underestimating how long it will take. But if they say it's impossible, they're probably wrong.” However exactly it works, and whether we like the idea or not, brain-to-brain interfacing—as Lieutenant Colonel Thomas Adams, quoted a few pages ago, said of robotics on the battlefield—is taking us to a place where we may not want to go but probably are unable to avoid.
