The Glass Cage: Automation and Us
Nicholas Carr
From rock climbers to surgeons to pianists, Mihaly Csikszentmihalyi explains, people who “routinely find deep enjoyment in an activity illustrate how an organized set of challenges and a corresponding set of skills result in optimal experience.” The jobs or hobbies they engage in “afford rich opportunities for action,” while the skills they develop allow them to make the most of those opportunities. The ability to act with aplomb in the world turns all of us into artists. “The effortless absorption experienced by the practiced artist at work on a difficult project always is premised upon earlier mastery of a complex body of skills.”
33
When automation distances us from our work, when it gets between us and the world, it erases the artistry from our lives.
“Since 1903 I have had under observation constantly from two to one hundred dancing mice.” So confessed the Harvard psychologist Robert M. Yerkes in the opening chapter of his 1907 book The Dancing Mouse, a 290-page paean to a rodent. But not just any rodent. The dancing mouse, Yerkes predicted, would prove as important to the behavioralist as the frog was to the anatomist.
When a local Cambridge doctor presented a pair of Japanese dancing mice to the Harvard Psychological Laboratory as a gift, Yerkes was underwhelmed. It seemed “an unimportant incident in the course of my scientific work.” But in short order he became infatuated with the tiny creatures and their habit of “whirling around on the same spot with incredible rapidity.” He bred scores of them, assigning each a number and keeping a meticulous log of its markings, gender, birth date, and ancestry. A “really admirable animal,” the dancing mouse was, he wrote, smaller and weaker than the average mouse—it was barely able to hold itself upright or “cling to an object”—but it proved “an ideal subject for the experimental study of many of the problems of animal behavior.” The breed was “easily cared for, readily tamed, harmless, incessantly active, and it lends itself satisfactorily to a large number of experimental situations.”
1
At the time, psychological research using animals was still new. Ivan Pavlov had only begun his experiments on salivating dogs in the 1890s, and it wasn’t until 1900 that an American graduate student named Willard Small dropped a rat into a maze and watched it scurry about. With his dancing mice, Yerkes greatly expanded the scope of animal studies. As he catalogued in The Dancing Mouse, he used the rodents as test subjects in the exploration of, among other things, balance and equilibrium, vision and perception, learning and memory, and the inheritance of behavioral traits. The mice were “experiment-impelling,” he reported. “The longer I observed and experimented with them, the more numerous became the problems which the dancers presented to me for solution.”
2
Early in 1906, Yerkes began what would turn out to be his most important and influential experiments on the dancers. Working with his student John Dillingham Dodson, he put, one by one, forty of the mice into a wooden box. At the far end of the box were two passageways, one painted white, the other black. If a mouse tried to enter the black passageway, it received, as Yerkes and Dodson later wrote, “a disagreeable electric shock.” The intensity of the jolt varied. Some mice were given a weak shock, others were given a strong one, and still others were given a moderate one. The researchers wanted to see if the strength of the stimulus would influence the speed with which the mice learned to avoid the black passage and go into the white one. What they discovered surprised them. The mice receiving the weak shock were relatively slow to distinguish the white and the black passageways, as might be expected. But the mice receiving the strong shock exhibited equally slow learning. The rodents quickest to understand their situation and modify their behavior were the ones given a moderate shock. “Contrary to our expectations,” the scientists reported, “this set of experiments did not prove that the rate of habit-formation increases with increase in the strength of the electric stimulus up to the point at which the shock becomes positively injurious. Instead an intermediate range of intensity of stimulation proved to be most favorable to the acquisition of a habit.”
3
A subsequent series of tests brought another surprise. The scientists put a new group of mice through the same drill, but this time they increased the brightness of the light in the white passageway and dimmed the light in the black one, strengthening the visual contrast between the two. Under this condition, the mice receiving the strongest shock were the quickest to avoid the black doorway. Learning didn’t fall off as it had in the first go-round. Yerkes and Dodson traced the difference in the rodents’ behavior to the fact that the setup of the second experiment had made things easier for the animals. Thanks to the greater visual contrast, the mice didn’t have to think as hard in distinguishing the passageways and associating the shock with the dark corridor. “The relation of the strength of electrical stimulus to rapidity of learning or habit-formation depends upon the difficultness of the habit,” they explained.
4
As a task becomes harder, the optimum amount of stimulation decreases. In other words, when the mice faced a really tough challenge, both an unusually weak stimulus and an unusually strong stimulus impeded their learning. In something of a Goldilocks effect, a moderate stimulus inspired the best performance.
Since its publication in 1908, the paper that Yerkes and Dodson wrote about their experiments, “The Relation of Strength of Stimulus to Rapidity of Habit-Formation,” has come to be recognized as a landmark in the history of psychology. The phenomenon they discovered, known as the Yerkes-Dodson law, has been observed, in various forms, far beyond the world of dancing mice and differently colored doorways. It affects people as well as rodents. In its human manifestation, the law is usually depicted as a bell curve that plots the relation of a person’s performance at a difficult task to the level of mental stimulation, or arousal, the person is experiencing.
At very low levels of stimulation, the person is so disengaged and uninspired as to be moribund; performance flat-lines. As stimulation picks up, performance strengthens, rising steadily along the left side of the bell curve until it reaches a peak. Then, as stimulation continues to intensify, performance drops off, descending steadily down the right side of the bell. When stimulation reaches its most intense level, the person essentially becomes paralyzed with stress; performance again flat-lines. Like dancing mice, we humans learn and perform best when we’re at the peak of the Yerkes-Dodson curve, where we’re challenged but not overwhelmed. At the top of the bell is where we enter the state of flow.
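To make the shape of that relationship concrete, here is a minimal, purely illustrative sketch in Python. The curve, the numbers, and the way “difficulty” shifts the peak are assumptions chosen for illustration, not values taken from Yerkes and Dodson’s data; the only point is that performance rises, peaks, and falls as stimulation grows, and that the peak sits at a lower level of stimulation when the task is harder.

```python
import numpy as np

def performance(arousal, difficulty):
    # Illustrative inverted-U ("bell") curve: performance peaks at a moderate
    # level of arousal. The optimum and the width of the "sweet spot" are
    # invented parameters, assumed to shrink as the task gets harder.
    optimum = 1.0 / difficulty
    width = 0.5 / difficulty
    return np.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

arousal_levels = np.linspace(0.0, 2.0, 201)   # from no stimulation to intense stress
for difficulty in (1.0, 2.0):                 # 1.0 = easy task, 2.0 = hard task (arbitrary units)
    scores = performance(arousal_levels, difficulty)
    peak = arousal_levels[scores.argmax()]
    print(f"difficulty {difficulty}: performance peaks near arousal {peak:.2f}")
```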
The Yerkes-Dodson law has turned out to have particular pertinence to the study of automation. It helps explain many of the unexpected consequences of introducing computers into workplaces and processes. In automation’s early days, it was thought that software, by handling routine chores, would reduce people’s workload and enhance their performance. The assumption was that workload and performance were inversely correlated. Ease a person’s mental strain, and she’ll be smarter and sharper on the job. The reality has turned out to be more complicated. Sometimes, computers succeed in moderating workload in a way that allows a person to excel at her work, devoting her full attention to the most pressing tasks. In other cases, automation ends up reducing workload too much. The worker’s performance suffers as she drifts to the left side of the Yerkes-Dodson curve.
We all know about the ill effects of information overload. It turns out that information underload can be equally debilitating. However well intentioned, making things easy for people can backfire. Human-factors scholars Mark Young and Neville Stanton have found evidence that a person’s “attentional capacity” actually “shrinks to accommodate reductions in mental workload.” In the operation of automated systems, they argue, “underload is possibly of greater concern [than overload], as it is more difficult to detect.”
5
Researchers worry that the lassitude produced by information underload is going to be a particular danger with coming generations of automotive automation. As software takes over more steering and braking chores, the person behind the wheel won’t have enough to do and will tune out. Making matters worse, the driver will likely have received little or no training in the use and risks of automation. Some routine accidents may be avoided, but we’re going to end up with even more bad drivers on the road.
In the worst cases, automation actually places added and unexpected demands on people, burdening them with extra work and pushing them to the right side of the Yerkes-Dodson curve. Researchers refer to this as the “automation paradox.” As Mark Scerbo, a human-factors expert at Virginia’s Old Dominion University, explains, “The irony behind automation arises from a growing body of research demonstrating that automated systems often increase workload and create unsafe working conditions.”
6
If, for example, the operator of a highly automated chemical plant is suddenly plunged into a fast-moving crisis, he may be overwhelmed by the need to monitor information displays and manipulate various computer controls while also following checklists, responding to alerts and alarms, and taking other emergency measures. Instead of relieving him of distractions and stress, computerization forces him to deal with all sorts of additional tasks and stimuli. Similar problems crop up during cockpit emergencies, when pilots are required to input data into their flight computers and scan information displays even as they’re struggling to take manual control of the plane. Anyone who’s gone off course while following directions from a mapping app knows firsthand how computer automation can cause sudden spikes in workload. It’s not easy to fiddle with a smartphone while driving a car.
What we’ve learned is that automation has a sometimes-tragic tendency to increase the complexity of a job at the worst possible moment—when workers already have too much to handle. The computer, introduced as an aid to reduce the chances of human error, ends up making it more likely that people, like shocked mice, will make the wrong move.
Late in the summer of 2005, researchers at the venerable RAND Corporation in California made a stirring prediction about the future of American medicine. Having completed what they called “the most detailed analysis ever conducted of the potential benefits of electronic medical records,” they declared that the U.S. health-care system “could save more than $81 billion annually and improve the quality of care” if hospitals and physicians automated their record keeping. The savings and other benefits, which RAND had estimated “using computer simulation models,” made it clear, one of the think tank’s top scientists said, “that it is time for the government and others who pay for health care to aggressively promote health information technology.”
1
The last sentence in a subsequent report detailing the research underscored the sense of urgency: “The time to act is now.”
2
When the RAND study appeared, excitement about the computerization of medicine was already running high. Early in 2004, George W. Bush had issued a presidential order establishing the Health Information Technology Adoption Initiative with the goal of digitizing most U.S. medical records within ten years. By the end of 2004, the federal government was handing out millions of dollars in grants to encourage the purchase of automated systems by doctors and hospitals. In June of 2005, the Department of Health and Human Services established a task force of government officials and industry executives, the American Health Information Community, to help spur the adoption of electronic medical records. The RAND research, by putting the anticipated benefits of electronic records into hard and seemingly reliable numbers, stoked both the excitement and the spending. As the New York Times would later report, the study “helped drive explosive growth in the electronic records industry and encouraged the federal government to give billions of dollars in financial incentives to hospitals and doctors that put the systems in place.”
3
Shortly after being sworn in as president in 2009, Barack Obama cited the RAND numbers when he announced a program to dole out an additional $30 billion in government funds to subsidize purchases of electronic medical record (EMR) systems. A frenzy of investment ensued, as some three hundred thousand doctors and four thousand hospitals availed themselves of Washington’s largesse.
4
Then, in 2013, just as Obama was being sworn in for a second term, RAND issued a new and very different report on the prospects for information technology in health care. The exuberance was gone; the tone now was chastened and apologetic. “Although the use of health IT has increased,” the authors of the paper wrote, “quality and efficiency of patient care are only marginally better. Research on the effectiveness of health IT has yielded mixed results. Worse yet, annual aggregate expenditures on health care in the United States have grown from approximately $2 trillion in 2005 to roughly $2.8 trillion today.” Worst of all, the EMR systems that doctors rushed to install with taxpayer money are plagued by problems with “interoperability.” The systems can’t talk to each other, which leaves critical patient data locked up in individual hospitals and doctors’ offices. One of the great promises of health IT has always been that it would, as the RAND authors noted, allow “a patient or provider to access needed health information anywhere at any time,” but because current EMR applications employ proprietary formats and conventions, they simply “enforce brand loyalty to a particular health care system.” While RAND continued to express high hopes for the future, it confessed that the “rosy scenario” in its original report had not panned out.
5
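The interoperability complaint is easier to see with a small, entirely hypothetical sketch. The vendors, field names, and patient values below are invented for illustration; the point is only that when each system stores the same facts in its own proprietary shape, every exchange of records requires a purpose-built translation, and data tends to stay wherever it was first entered.

```python
# Hypothetical illustration of the interoperability problem: two invented
# vendors ("A" and "B") record the same clinical facts in incompatible shapes,
# so sharing a chart requires a hand-built translation step for each pair of
# systems. Vendor names, field names, and values are made up.

record_from_vendor_a = {
    "pt_name": "DOE, JANE",        # single "LAST, FIRST" string
    "bp": "120/80",                # blood pressure stored as one string
    "visit_dt": "03/01/2013",      # US-style MM/DD/YYYY date
}

def translate_a_to_b(rec):
    """Convert Vendor A's layout into Vendor B's (equally hypothetical) layout."""
    family, given = rec["pt_name"].split(", ")
    systolic, diastolic = rec["bp"].split("/")
    month, day, year = rec["visit_dt"].split("/")
    return {
        "patient": {"family": family, "given": given},
        "bp_systolic": int(systolic),              # B stores numbers, not strings
        "bp_diastolic": int(diastolic),
        "visit_date": f"{year}-{month}-{day}",     # B uses ISO-style dates
    }

print(translate_a_to_b(record_from_vendor_a))
```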