The Glass Cage: Automation and Us
Author: Nicholas Carr
Because automated systems usually work fine even when we lose awareness or objectivity, we are rarely penalized for our complacency or our bias. That ends up compounding the problems, as Parasuraman pointed out in a 2010 paper written with his German colleague Dietrich Manzey. “Given the usually high reliability of automated systems, even highly complacent and biased behavior of operators rarely leads to obvious performance consequences,” the scholars wrote. The lack of negative feedback can in time induce “a cognitive process that resembles what has been referred to as ‘learned carelessness.’ ” [11]
Think about driving a car when you’re sleepy. If you begin to nod off and drift out of your lane, you’ll usually go onto a rough shoulder, hit a rumble strip, or earn a honk from another motorist—signals that jolt you back awake. If you’re in a car that automatically keeps you within a lane by monitoring the lane markers and adjusting the steering, you won’t receive such warnings. You’ll drift into a deeper slumber. Then if something unexpected happens—an animal runs into the road, say, or a car stops short in front of you—you’ll be much more likely to have an accident. By isolating us from negative feedback, automation makes it harder for us to stay alert and engaged. We tune out even more.
Our susceptibility to complacency and bias explains how a reliance on automation can lead to errors of both commission and omission. We accept and act on information that turns out to be incorrect or incomplete, or we fail to see things that we should have seen. But the way that a reliance on computers weakens awareness and attentiveness also points to a more insidious problem. Automation tends to turn us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit our ability to learn and to develop expertise. Whether automation enhances or degrades our performance in a given task, over the long run it may diminish our existing skills or prevent us from acquiring new ones.
Since the late 1970s, cognitive psychologists have been documenting a phenomenon called the generation effect. It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they read them from a page. In one early and famous experiment, conducted by University of Toronto psychologist Norman Slamecka, people used flash cards to memorize pairs of antonyms, like hot and cold. Some of the test subjects were given cards that had both words printed in full, like this:
HOT : COLD
Others used cards that showed only the first letter of the second word, like this:
HOT : C
The people who used the cards with the missing letters performed much better in a subsequent test measuring how well they remembered the word pairs. Simply forcing their minds to fill in a blank, to act rather than observe, led to stronger retention of information. [12]
The generation effect, it has since become clear, influences memory and learning in many different circumstances. Experiments have revealed evidence of the effect in tasks that involve not only remembering letters and words but also remembering numbers, pictures, and sounds, completing math problems, answering trivia questions, and reading for comprehension. Recent studies have also demonstrated the benefits of the generation effect for higher forms of teaching and learning. A 2011 paper in Science showed that students who read a complex science assignment during a study period and then spent a second period recalling as much of it as possible, unaided, learned the material more fully than students who read the assignment repeatedly over the course of four study periods. [13]
The mental act of generation improves people’s ability to carry out activities that, as education researcher Britte Haugan Cheng has written, “require conceptual reasoning and requisite deeper cognitive processing.” Indeed, Cheng says, the generation effect appears to strengthen as the material generated by the mind becomes more complex. [14]
Psychologists and neuroscientists are still trying to figure out what goes on in our minds to give rise to the generation effect. But it’s clear that deep cognitive and memory processes are involved. When we work hard at something, when we make it the focus of attention and effort, our mind rewards us with greater understanding. We remember more and we learn more. In time, we gain know-how, a particular talent for acting fluidly, expertly, and purposefully in the world. That’s hardly a surprise. Most of us know that the only way to get good at something is by actually doing it. It’s easy to gather information quickly from a computer screen—or from a book, for that matter. But true knowledge, particularly the kind that lodges deep in memory and manifests itself in skill, is harder to come by. It requires a vigorous, prolonged struggle with a demanding task.
The Australian psychologists Simon Farrell and Stephan Lewandowsky made the connection between automation and the generation effect in a paper published in 2000. In Slamecka’s experiment, they pointed out, supplying the second word of an antonym pair, rather than forcing a person to call the word to mind, “can be considered an instance of automation because a human activity—generation of the word ‘COLD’ by participants—has been obviated by a printed stimulus.” By extension, “the reduction in performance that is observed when generation is replaced by reading can be considered a manifestation of complacency.” [15]
That helps illuminate the cognitive cost of automation. When we carry out a task or a job on our own, we seem to use different mental processes than when we rely on the aid of a computer. When software reduces our engagement with our work, and in particular when it pushes us into a more passive role as observer or monitor, we circumvent the deep cognitive processing that underpins the generation effect. As a result, we hamper our ability to gain the kind of rich, real-world knowledge that leads to know-how. The generation effect requires precisely the kind of struggle that automation seeks to alleviate.
In 2004, Christof van Nimwegen, a cognitive psychologist at Utrecht University in the Netherlands, began a series of simple but ingenious experiments to investigate software’s effects on memory formation and the development of expertise. [16]
He recruited two groups of people and had them play a computer game based on a classic logic puzzle called Missionaries and Cannibals. To complete the puzzle, a player has to transport across a hypothetical river five missionaries and five cannibals (or, in van Nimwegen’s version, five yellow balls and five blue ones), using a boat that can accommodate no more than three passengers at a time. The tricky part is that there can never be more cannibals than missionaries in one place, either in the boat or on the riverbanks. (If outnumbered, the missionaries become the cannibals’ dinner, one assumes.) Figuring out the series of boat trips that can best accomplish the task requires rigorous analysis and careful planning.
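The puzzle van Nimwegen chose is a small state-space search problem, which is what makes it a good laboratory task: the rules are few, but finding the right sequence of crossings takes real planning. As an illustration only (this is not the software used in the study, and all names are my own), here is a minimal breadth-first solver for the five-and-five, three-passenger version described above:

```python
from collections import deque
from itertools import product

# State: (missionaries_on_left, cannibals_on_left, boat_on_left).
# Everyone starts on the left bank; the boat holds at most 3.
M, C, CAP = 5, 5, 3

def safe(m, c):
    # A bank is safe if it has no missionaries, or at least as many
    # missionaries as cannibals; check both banks.
    return (m == 0 or m >= c) and (M - m == 0 or M - m >= C - c)

def solve():
    start, goal = (M, C, True), (0, 0, False)
    frontier = deque([start])
    parent = {start: None}          # for reconstructing the path
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]       # start -> goal
        m, c, left = state
        for dm, dc in product(range(CAP + 1), repeat=2):
            if not 1 <= dm + dc <= CAP:
                continue            # boat carries 1 to CAP passengers
            if dm and dc > dm:
                continue            # missionaries outnumbered in the boat
            nm = m - dm if left else m + dm
            nc = c - dc if left else c + dc
            if not (0 <= nm <= M and 0 <= nc <= C) or not safe(nm, nc):
                continue
            nxt = (nm, nc, not left)
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

path = solve()
print(f"solved in {len(path) - 1} crossings")
```

Because breadth-first search explores states in order of distance from the start, the first time it reaches the goal it has found a shortest sequence of crossings. The "aimless clicking" van Nimwegen observed is, in effect, what search looks like without this kind of plan.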
One of van Nimwegen’s groups worked on the puzzle using software that provided step-by-step guidance, offering, for instance, on-screen prompts to highlight which moves were permissible and which weren’t. The other group used a rudimentary program that offered no assistance. As you’d expect, the people using the helpful software made faster progress at the outset. They could follow the prompts rather than having to pause before each move to recall the rules and figure out how they applied to the new situation. But as the game advanced, the players using the rudimentary software began to excel. In the end, they were able to work out the puzzle more efficiently, with significantly fewer wrong moves, than their counterparts who were receiving assistance. In his report on the experiment, van Nimwegen concluded that the subjects using the rudimentary program developed a clearer conceptual understanding of the task. They were better able to think ahead and plot a successful strategy. Those relying on guidance from the software, by contrast, often became confused and would “aimlessly click around.”
The cognitive penalty imposed by the software aids became even clearer eight months later, when van Nimwegen had the same people work through the puzzle again. Those who had earlier used the rudimentary software finished the game almost twice as quickly as their counterparts. The subjects using the basic program, he wrote, displayed “more focus” during the task and “better imprinting of knowledge” afterward. They enjoyed the benefits of the generation effect. Van Nimwegen and some of his Utrecht colleagues went on to conduct experiments involving more realistic tasks, such as using calendar software to schedule meetings and event-planning software to assign conference speakers to rooms. The results were the same. People who relied on the help of software prompts displayed less strategic thinking, made more superfluous moves, and ended up with a weaker conceptual understanding of the assignment. Those using unhelpful programs planned better, worked smarter, and learned more. [17]
What van Nimwegen observed in his laboratory—that when we automate cognitive tasks like problem solving, we hamper the mind’s ability to translate information into knowledge and knowledge into know-how—is also being documented in the real world. In many businesses, managers and other professionals depend on so-called expert systems to sort and analyze information and suggest courses of action. Accountants, for example, use decision-support software in corporate audits. The applications speed the work, but there are signs that as the software becomes more capable, the accountants become less so. One study, conducted by a group of Australian professors, examined the effects of the expert systems used by three international accounting firms. Two of the companies employed advanced software that, based on an accountant’s answers to basic questions about a client, recommended a set of relevant business risks to include in the client’s audit file. The third firm used simpler software that provided a list of potential risks but required the accountant to review them and manually select the pertinent ones for the file. The researchers gave accountants from each firm a test measuring their knowledge of risks in industries in which they had performed audits. Those from the firm with the less helpful software displayed a significantly stronger understanding of different forms of risk than did those from the other two firms. The decline in learning associated with advanced software affected even veteran auditors—those with more than five years of experience at their current firm. [18]
Other studies of expert systems reveal similar effects. The research indicates that while decision-support software can help novice analysts make better judgments in the short run, it can also make them mentally lazy. By diminishing the intensity of their thinking, the software retards their ability to encode information in memory, which makes them less likely to develop the rich tacit knowledge essential to true expertise. [19]
The drawbacks to automated decision aids can be subtle, but they have real consequences, particularly in fields where analytical errors have far-reaching repercussions. Miscalculations of risk, exacerbated by high-speed computerized trading programs, played a major role in the near meltdown of the world’s financial system in 2008. As Tufts University management professor Amar Bhidé has suggested, “robotic methods” of decision making led to a widespread “judgment deficit” among bankers and other Wall Street professionals. [20]
While it may be impossible to pin down the precise degree to which automation figured in the disaster, or in subsequent fiascos like the 2010 “flash crash” on U.S. exchanges, it seems prudent to take seriously any indication that a widely used technology may be diminishing the knowledge or clouding the judgment of people in sensitive jobs. In a 2013 paper, computer scientists Gordon Baxter and John Cartlidge warned that a reliance on automation is eroding the skills and knowledge of financial professionals even as computer-trading systems make financial markets more risky. [21]
Some software writers worry that their profession’s push to ease the strain of thinking is taking a toll on their own skills. Programmers today often use applications called integrated development environments, or IDEs, to aid them in composing code. The applications automate many tricky and time-consuming chores. They typically incorporate auto-complete, error-correction, and debugging routines, and the more sophisticated of them can evaluate and revise the structure of a program through a process known as refactoring. But as the applications take over the work of coding, programmers lose opportunities to practice their craft and sharpen their talent. “Modern IDEs are getting ‘helpful’ enough that at times I feel like an IDE operator rather than a programmer,” writes Vivek Haldar, a veteran software developer with Google. “The behavior all these tools encourage is not ‘think deeply about your code and write it carefully,’ but ‘just write a crappy first draft of your code, and then the tools will tell you not just what’s wrong with it, but also how to make it better.’ ” His verdict: “Sharp tools, dull minds.” [22]