The Glass Cage: Automation and Us

And because a person’s skills “deteriorate when they are not used,” she added, even an experienced system operator will eventually begin to act like “an inexperienced one” if his main job consists of watching rather than acting. As his instincts and reflexes grow rusty from disuse, he’ll have trouble spotting and diagnosing problems, and his responses will be slow and deliberate rather than quick and automatic. Combined with the loss of situational awareness, the degradation of know-how raises the odds that when something goes wrong, as it sooner or later will, the operator will react ineptly. And once that happens, system designers will work to place even greater limits on the operator’s role, taking him further out of the action and making it more likely that he’ll mess up in the future. The assumption that the human being will be the weakest link in the system becomes self-fulfilling.

Ergonomics, the art and science of fitting tools and workplaces to the people who use them, dates back at least to the Ancient Greeks. Hippocrates, in “On Things Relating to the Surgery,” provides precise instructions for how operating rooms should be lit and furnished, how medical instruments should be arranged and handled, even how surgeons should dress. In the design of many Greek tools, we see evidence of an exquisite consideration of the ways an implement’s form, weight, and balance affect a worker’s productivity, stamina, and health. In early Asian civilizations, too, there are signs that the instruments of labor were carefully designed with the physical and psychological well-being of the worker in mind.13

It wasn’t until the Second World War, though, that ergonomics began to emerge, together with its more theoretical cousin cybernetics, as a formal discipline. Many thousands of inexperienced soldiers and other recruits had to be entrusted with complicated and dangerous weapons and machinery, and there was little time for training. Awkward designs and confusing controls could no longer be tolerated. Thanks to trailblazing thinkers like Norbert Wiener and U.S. Air Force psychologists Paul Fitts and Alphonse Chapanis, military and industrial planners came to appreciate that human beings play as integral a role in the successful workings of a complex technological system as do the system’s mechanical components and electronic regulators. You can’t optimize a machine and then force the worker to adapt to it, in rigid Taylorist fashion; you have to design the machine to suit the worker.

Inspired at first by the war effort and then by the drive to incorporate computers into commerce, government, and science, a large and dedicated group of psychologists, physiologists, neurobiologists, engineers, sociologists, and designers began to devote their varied talents to studying the interactions of people and machines. Their focus may have been the battlefield and the factory, but their aspiration was deeply humanistic: to bring people and technology together in a productive, resilient, and safe symbiosis, a harmonious human-machine partnership that would get the best from both sides. If ours is an age of complex systems, then ergonomists are our metaphysicians.

At least they should be. All too often, discoveries and insights from the field of ergonomics, or, as it’s now commonly known, human-factors engineering, are ignored or given short shrift. Concerns about the effects of computers and other machines on people’s minds and bodies have routinely been trumped by the desire to achieve maximum efficiency, speed, and precision—or simply to turn as big a profit as possible. Software programmers receive little or no training in ergonomics, and they remain largely oblivious to relevant human-factors research. It doesn’t help that engineers and computer scientists, with their strict focus on math and logic, have a natural antipathy toward the “softer” concerns of their counterparts in the human-factors field. A few years before his death in 2006, the ergonomics pioneer David Meister, recalling his own career, wrote that he and his colleagues “always worked against the odds so that anything that was accomplished was almost unexpected.” The course of technological progress, he wistfully concluded, “is tied to the profit motive; consequently, it has little appreciation of the human.”14

It wasn’t always so. People first began thinking about technological progress as a force in history in the latter half of the eighteenth century, when the scientific discoveries of the Enlightenment began to be translated into the practical machinery of the Industrial Revolution. That was also, and not coincidentally, a time of political upheaval. The democratic, humanitarian ideals of the Enlightenment culminated in the revolutions in America and France, and those ideals also infused society’s view of science and technology. Technical advances were valued—by intellectuals, if not always by workers—as means to political reform. Progress was defined in social terms, with technology playing a supporting role. Enlightenment thinkers such as Voltaire, Joseph Priestley, and Thomas Jefferson saw, in the words of the cultural historian Leo Marx, “the new sciences and technologies not as ends in themselves, but as instruments for carrying out a comprehensive transformation of society.”

By the middle of the nineteenth century, however, the reformist view had, at least in the United States, been eclipsed by a new and very different concept of progress in which technology itself played the starring role. “With the further development of industrial capitalism,” writes Marx, “Americans celebrated the advance of science and technology with increasing fervor, but they began to detach the idea from the goal of social and political liberation.” Instead, they embraced “the now familiar view that innovations in science-based technologies are in themselves a sufficient and reliable basis for progress.”15 New technology, once valued as a means to a greater good, came to be revered as a good in itself.

It’s hardly a surprise, then, that in our own time the capabilities of computers have, as Bainbridge suggested, determined the division of labor in complex automated systems. To boost productivity, reduce labor costs, and avoid human error—to further progress—you simply allocate control over as many activities as possible to software, and as software’s capabilities advance, you extend the scope of its authority even further. The more technology, the better. The flesh-and-blood operators are left with responsibility only for those tasks that the designers can’t figure out how to automate, such as watching for anomalies or providing an emergency backup in the event of a system failure. People are pushed further and further out of what engineers term “the loop”—the cycle of action, feedback, and decision making that controls a system’s moment-by-moment operations.

Ergonomists call the prevailing approach technology-centered automation. Reflecting an almost religious faith in technology, and an equally fervent distrust of human beings, it substitutes misanthropic goals for humanistic ones. It turns the glib “who needs humans?” attitude of the technophilic dreamer into a design ethic. As the resulting machines and software tools make their way into workplaces and homes, they carry that misanthropic ideal into our lives. “Society,” writes Donald Norman, a cognitive scientist and author of several influential books about product design, “has unwittingly fallen into a machine-centered orientation to life, one that emphasizes the needs of technology over those of people, thereby forcing people into a supporting role, one for which we are most unsuited. Worse, the machine-centered viewpoint compares people to machines and finds us wanting, incapable of precise, repetitive, accurate actions.” Although it now “pervades society,” this view warps our sense of ourselves. “It emphasizes tasks and activities that we should not be performing and ignores our primary skills and attributes—activities that are done poorly, if at all, by machines. When we take the machine-centered point of view, we judge things on artificial, mechanical merits.”16

It’s entirely logical that those with a mechanical bent would take a mechanical view of life. The impetus behind invention is often, as Norbert Wiener put it, “the desires of the gadgeteer to see the wheels go round.”17 And it’s equally logical that such people would come to control the design and construction of the intricate systems and software programs that now govern or mediate society’s workings. They’re the ones who know the code. As society becomes ever more computerized, the programmer becomes its unacknowledged legislator. By defining the human factor as a peripheral concern, the technologist also removes the main impediment to the fulfillment of his desires; the unbridled pursuit of technological progress becomes self-justifying. To judge technology primarily on its technological merits is to give the gadgeteer carte blanche.

In addition to fitting the dominant ideology of progress, the bias to let technology guide decisions about automation has practical advantages. It greatly simplifies the work of the system builders. Engineers and programmers need only take into account what computers and machines can do. That allows them to narrow their focus and winnow a project’s specifications. It relieves them of having to wrestle with the complexities, vagaries, and frailties of the human body and psyche. But however compelling as a design tactic, the simplicity of technology-centered automation is a mirage. Ignoring the human factor does not remove the human factor.

In a much-cited 1997 paper, “Automation Surprises,” the human-factors experts Nadine Sarter, David Woods, and Charles Billings traced the origins of the technology-focused approach. They described how it grew out of and continues to reflect the “myths, false hopes, and misguided intentions associated with modern technology.” The arrival of the computer, first as an analogue machine and then in its familiar digital form, encouraged engineers and industrialists to take an idealistic view of electronically controlled systems, to see them as a kind of cure-all for human inefficiency and fallibility. The order and cleanliness of computer operations and outputs seemed heaven-sent when contrasted with the earthly messiness of human affairs. “Automation technology,” Sarter and her colleagues wrote, “was originally developed in hope of increasing the precision and economy of operations while, at the same time, reducing operator workload and training requirements. It was considered possible to create an autonomous system that required little if any human involvement and therefore reduced or eliminated the opportunity for human error.” That belief led, again with pristine logic, to the further assumption that “automated systems could be designed without much consideration for the human element in the overall system.”18

The desires and beliefs underpinning the dominant design approach, the authors continued, have proved naive and damaging. While automated systems have often enhanced the “precision and economy of operations,” they have fallen short of expectations in other respects, and they have introduced a whole new set of problems. Most of the shortcomings stem from “the fact that even highly automated systems still require operator involvement and therefore communication and coordination between human and machine.” But because the systems have been designed without sufficient regard for the people who operate them, their communication and coordination capabilities are feeble. In consequence, the computerized systems lack the “complete knowledge” of the work and the “comprehensive access to the outside world” that only people can provide. “Automated systems do not know when to initiate communication with the human about their intentions and activities or when to request additional information from the human. They do not always provide adequate feedback to the human who, in turn, has difficulties tracking automation status and behavior and realizing there is a need to intervene to avoid undesirable actions by the automation.” Many of the problems that bedevil automated systems stem from “the failure to design human-machine interaction to exhibit the basic competencies of human-human interaction.”19

Engineers and programmers compound the problems when they hide the workings of their creations from the operators, turning every system into an inscrutable black box. Normal human beings, the unstated assumption goes, don’t have the smarts or the training to grasp the intricacies of a software program or robotic apparatus. If you tell them too much about the algorithms or procedures that govern its operations and decisions, you’ll just confuse them or, worse yet, encourage them to tinker with the system. It’s safer to keep people in the dark. Here again, though, the attempt to avoid human errors by removing personal responsibility ends up making the errors more likely. An ignorant operator is a dangerous operator. As the University of Iowa human-factors professor John Lee explains, it’s common for an automated system to use “control algorithms that are at odds with the control strategies and mental model of the person [operating it].” If the person doesn’t understand those algorithms, there’s no way she can “anticipate the actions and limits of the automation.” The human and the machine, operating under conflicting assumptions, end up working at cross-purposes. People’s inability to comprehend the machines they use can also undermine their self-confidence, Lee reports, which “can make them less inclined to intervene” when something goes wrong.20
