The Glass Cage: Automation and Us

The more we learn about ourselves, the more we realize how misleading that particular “reality” is. One of the most interesting and illuminating areas of study in contemporary psychology and neuroscience involves what’s called embodied cognition. Today’s scientists and scholars are confirming John Dewey’s insight of a century ago: Not only are brain and body composed of the same matter, but their workings are interwoven to a degree far beyond what we assume. The biological processes that constitute “thinking” emerge not just from neural computations in the skull but from the actions and sensory perceptions of the entire body. “For example,” explains Andy Clark, a philosopher of mind at the University of Edinburgh who has written widely on embodied cognition, “there’s good evidence that the physical gestures we make while we speak actually reduce the ongoing cognitive load on the brain, and that the biomechanics of the muscle and tendon systems of the legs hugely simplify the problem of controlled walking.”[53] The retina, recent research shows, isn’t a passive sensor sending raw data to the brain, as was once assumed; it actively shapes what we see. The eye has smarts of its own.[54] Even our conceptual musings appear to involve the body’s systems for sensing and moving. When we think abstractly or metaphorically about objects or phenomena in the world—tree branches, say, or gusts of wind—we mentally reenact, or simulate, our physical experience of the things.[55] “For creatures like us,” Clark argues, “body, world, and action” are “co-architects of that elusive thing that we call the mind.”[56]

How cognitive functions are distributed among the brain, the sensory organs, and the rest of the body is still being studied and debated, and some of the more extravagant claims made by embodied-cognition advocates, such as the suggestion that the individual mind extends outside the body into the surrounding environment, remain controversial. What is clear is that we can no more separate our thinking from our physical being than we can separate our physical being from the world that spawned it. “Nothing about human experience remains untouched by human embodiment,” writes the philosopher Shaun Gallagher: “from the basic perceptual and emotional processes that are already at work in infancy, to a sophisticated interaction with other people; from the acquisition and creative use of language, to higher cognitive faculties involving judgment and metaphor; from the exercise of free will in intentional action, to the creation of cultural artifacts that provide for further human affordances.”[57]

The idea of embodied cognition helps explain, as Gallagher suggests, the human race’s prodigious facility for technology. Tuned to the surrounding environment, our bodies and brains are quick to bring tools and other artifacts into our thought processes—to treat things, neurologically, as parts of our selves. If you walk with a cane or work with a hammer or fight with a sword, your brain will incorporate the tool into its neuronal map of your body. The nervous system’s blending of body and object is not unique to humans. Monkeys use sticks to dig ants and termites from the ground, elephants use leafy branches to swat away biting flies, dolphins use bits of sponge to protect themselves from scrapes while digging for food on the ocean floor. But Homo sapiens’s superior aptitude for conscious reasoning and planning enables us to design ingenious tools and instruments for all sorts of purposes, extending our mental as well as our physical capacities. We have an ancient tendency toward what Clark terms “cognitive hybridization,” the mixing of the biological and the technological, the internal and the external.[58]

The ease with which we make technology part of our selves can also lead us astray. We can grant power to our tools in ways that may not be in our best interest. One of the great ironies of our time is that even as scientists discover more about the essential roles that physical action and sensory perception play in the development of our thoughts, memories, and skills, we’re spending less time acting in the world and more time living and working through the abstract medium of the computer screen. We’re disembodying ourselves, imposing sensory constraints on our existence. With the general-purpose computer, we’ve managed, perversely enough, to devise a tool that steals from us the bodily joy of working with tools.

Our belief, intuitive but erroneous, that our intellect operates in isolation from our body leads us to discount the importance of involving ourselves with the world of things. That in turn makes it easy to assume that a computer—which to all appearances is an artificial brain, a “thinking machine”—is a sufficient and indeed superior tool for performing the work of the mind. Google’s Michael Jones takes it as a given that “people are about 20 IQ points smarter now,” thanks to his company’s mapping tools and other online services.[59] Tricked by our own brains, we assume that we sacrifice nothing, or at least nothing essential, by relying on software scripts to travel from place to place or to design buildings or to engage in other sorts of thoughtful and inventive work. Worse yet, we remain oblivious to the fact that there are alternatives. We ignore the ways that software programs and automated systems might be reconfigured so as not to weaken our grasp on the world but to strengthen it. For, as human-factors researchers and other experts on automation have found, there are ways to break the glass cage without losing the many benefits computers grant us.

AUTOMATION FOR THE PEOPLE

WHO NEEDS HUMANS, anyway?

That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers are advancing so rapidly, and if people by comparison seem slow, clumsy, and error prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation altogether? “We need to let robots take over,” declared the technology theorist Kevin Kelly in a 2013 Wired cover story. He pointed to aviation as an example: “A computerized brain known as the autopilot can fly a 787 jet unaided, but irrationally we place human pilots in the cockpit to babysit the autopilot ‘just in case.’ ”[1] The news that a person was driving the Google car that crashed in 2011 prompted a writer at a prominent technology blog to exclaim, “More robo-drivers!”[2] Commenting on the struggles of Chicago’s public schools, Wall Street Journal writer Andy Kessler remarked, only half-jokingly, “Why not forget the teachers and issue all 404,151 students an iPad or Android tablet?”[3] In a 2012 essay, the respected Silicon Valley venture capitalist Vinod Khosla suggested that health care will be much improved when medical software—which he dubs “Doctor Algorithm”—goes from assisting primary-care physicians in making diagnoses to replacing the doctors entirely. “Eventually,” he wrote, “we won’t need the average doctor.”[4]
The cure for imperfect automation is total automation.

That’s a seductive idea, but it’s simplistic. Machines share the fallibility of their makers. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter a cluster of circumstances that its designers and programmers never anticipated and that leave its algorithms baffled. In early 2009, just a few weeks before the Continental Connection crash in Buffalo, a US Airways Airbus A320 lost all engine power after hitting a flock of Canada geese on takeoff from LaGuardia Airport in New York. Acting quickly and coolly, Captain Chesley Sullenberger and his first officer, Jeffrey Skiles, managed, in three harrowing minutes, to ditch the crippled jet safely in the Hudson River. All passengers and crew were evacuated. If the pilots hadn’t been there to “babysit” the A320, a craft with state-of-the-art automation, the jet would have crashed and everyone on board would almost certainly have perished. For a passenger jet to have all its engines fail is rare. But it’s not rare for pilots to rescue planes from mechanical malfunctions, autopilot glitches, rough weather, and other unexpected events. “Again and again,” Germany’s Der Spiegel reported in a 2009 feature on airline safety, the pilots of fly-by-wire planes “run into new, nasty surprises that none of the engineers had predicted.”[5]

The same is true elsewhere. The mishap that occurred while a person was driving Google’s Prius was widely reported in the press; what we don’t hear much about are all the times the backup drivers in Google cars, and other automated test vehicles, have to take the wheel to perform maneuvers the computers can’t handle. Google requires that people drive its cars manually on most urban and residential streets, and any employee who wants to operate one of the vehicles has to complete rigorous training in emergency driving techniques.[6] Driverless cars aren’t quite as driverless as they seem.

In medicine, caregivers often have to overrule misguided instructions or suggestions offered by clinical computers. Hospitals have found that while computerized drug-ordering systems alleviate some common errors in dispensing medication, they introduce new problems. A 2011 study at one hospital revealed that the incidence of duplicated medication orders actually increased after drug ordering was automated.[7] Diagnostic software is also far from perfect. Doctor Algorithm may well give you the right diagnosis and treatment most of the time, but if your particular set of symptoms doesn’t fit the probability profile, you’re going to be glad that Doctor Human was there in the examination room to review and overrule the computer’s calculations.

As automation technologies become more complicated and more interconnected, with a welter of links and dependencies among software instructions, databases, network protocols, sensors, and mechanical parts, the potential sources of failure multiply. Systems become susceptible to what scientists call “cascading failures,” in which a small malfunction in one component sets off a far-flung and catastrophic chain of breakdowns. Ours is a world of “interdependent networks,” a group of physicists reported in a 2010 Nature article. “Diverse infrastructures such as water supply, transportation, fuel and power stations are coupled together” through electronic and other links, which ends up making all of them “extremely sensitive to random failure.” That’s true even when the connections are limited to exchanges of data.[8]
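To see how a handful of local faults can hollow out two coupled systems, consider a minimal sketch, not drawn from the book or from the Nature study itself: two toy networks whose nodes depend one-to-one on partners in the other network. The sizes, topology, and failure counts are all invented for illustration, and the survival rule is a simplification (a node here needs its cross-network partner and at least one living neighbor, where the published model uses membership in the giant connected component).

```python
# Toy illustration of a cascading failure across two interdependent
# networks. Every number and the network layout are invented; the
# survival rule is a simplified stand-in for the published model.
import random

random.seed(1)
N = 200  # nodes per network

def random_graph(n, avg_degree=4):
    """Build a random undirected graph: node -> set of neighbors."""
    g = {i: set() for i in range(n)}
    for _ in range(n * avg_degree // 2):
        a, b = random.sample(range(n), 2)
        g[a].add(b)
        g[b].add(a)
    return g

power = random_graph(N)   # e.g., a power grid
comms = random_graph(N)   # e.g., the control network that runs it

# One-to-one interdependence: power node i needs comms node i, and vice versa.
alive_power = set(range(N))
alive_comms = set(range(N))

# Knock out a small fraction of power nodes at random.
for node in random.sample(range(N), 10):
    alive_power.discard(node)

def step(alive, graph, partners_alive):
    """A node survives only if its cross-network partner is alive
    and it still has at least one living neighbor of its own."""
    return {n for n in alive
            if n in partners_alive
            and any(nb in alive for nb in graph[n])}

# Iterate until the cascade stops spreading.
while True:
    new_power = step(alive_power, power, alive_comms)
    new_comms = step(alive_comms, comms, new_power)
    if new_power == alive_power and new_comms == alive_comms:
        break
    alive_power, alive_comms = new_power, new_comms

print(f"Initial failures: 10; surviving power nodes: {len(alive_power)}")
```

The point of the sketch is the feedback loop: failures in one network disable partners in the other, which disables still more nodes in the first, so the final damage can dwarf the initial fault.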

Vulnerabilities become harder to discern too. With the industrial machinery of the past, explains MIT computer scientist Nancy Leveson in her book Engineering a Safer World, “interactions among components could be thoroughly planned, understood, anticipated, and guarded against,” and the overall design of a system could be tested exhaustively before it was put into everyday use. “Modern, high-tech systems no longer have these properties.” They’re less “intellectually manageable” than were their nuts-and-bolts predecessors.[9] All the parts may work flawlessly, but a small error or oversight in system design—a glitch that might be buried in hundreds of thousands of lines of software code—can still cause a major accident.

The dangers are compounded by the incredible speed at which computers can make decisions and trigger actions. That was demonstrated over the course of a hair-raising hour on the morning of August 1, 2012, when Wall Street’s largest trading firm, Knight Capital Group, rolled out a new automated program for buying and selling shares. The cutting-edge software had a bug that went undetected during testing. The program immediately flooded exchanges with unauthorized and irrational orders, trading $2.6 million worth of stocks every second. In the forty-five minutes that passed before Knight’s mathematicians and computer scientists were able to track the problem to its source and shut the offending program down, the software racked up $7 billion in errant trades. The company ended up losing almost half a billion dollars, putting it on the verge of bankruptcy. Within a week, a consortium of other Wall Street firms bailed Knight out to avoid yet another disaster in the financial industry.
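The speed is worth pausing over. A back-of-the-envelope check, using only the figures quoted above, shows how forty-five minutes of unsupervised trading at $2.6 million a second compounds into the reported total:

```python
# Sanity-check the Knight Capital figures quoted in the text.
# The inputs come from the passage above; the arithmetic is ours.
dollars_per_second = 2.6e6   # $2.6 million in trades every second
runaway_minutes = 45         # time until the program was shut down

total = dollars_per_second * runaway_minutes * 60
print(f"${total / 1e9:.2f} billion in errant trades")  # -> $7.02 billion
```

No human monitor could have read the order flow at that rate, let alone checked it; the window for intervention was measured in seconds, the damage in billions.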

Technology improves, of course, and bugs get fixed. Flawlessness, though, remains an ideal that can never be achieved. Even if a perfect automated system could be designed and built, it would still operate in an imperfect world. Autonomous cars don’t drive the streets of utopia. Robots don’t ply their trades in Elysian factories. Geese flock. Lightning strikes. The conviction that we can build an entirely self-sufficient, entirely reliable automated system is itself a manifestation of automation bias.

Unfortunately, that conviction is common not only among technology pundits but also among engineers and software programmers—the very people who design the systems. In a classic 1983 article in the journal Automatica, Lisanne Bainbridge, an engineering psychologist at University College London, described a conundrum that lies at the core of computer automation. Because designers often assume that human beings are “unreliable and inefficient,” at least when compared to a computer, they strive to give them as small a role as possible in the operation of systems. People end up functioning as mere monitors, passive watchers of screens.[10] That’s a job that humans, with our notoriously wandering minds, are particularly bad at. Research on vigilance, dating back to studies of British radar operators watching for German submarines during World War II, shows that even highly motivated people can’t keep their attention focused on a display of relatively stable information for more than about half an hour.[11] They get bored; they daydream; their concentration drifts. “This means,” Bainbridge wrote, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.”[12]
