These are called substitute risks. New hazards materialize directly as a result of attempts to reduce hazards. Fireproof asbestos is toxic, but most of its substitutes are equally if not more toxic. Furthermore, the removal of asbestos greatly increases its danger compared to the low risk of letting it remain in place in buildings. The Precautionary Principle is oblivious to the notion of substitute risks.
In general the Precautionary Principle is biased against anything new. Many established technologies and “natural” processes have unexamined faults as great as those of any new technology. But the Precautionary Principle establishes a drastically elevated threshold for things that are new. In effect it grandfathers in the risks of the old, or the “natural.” A few examples: Crops raised without the shield of pesticides generate more of their own natural pesticides to combat insects, but these indigenous toxins are not subject to the Precautionary Principle because they aren't “new.” The risks of new plastic water pipes are not compared with the risks of old metal pipes. The risks of DDT are not put in context with the old risks of dying of malaria.
The surest remedy for uncertainty is faster, better scientific studies. Science is a process of testing that will never eliminate uncertainty totally, and its consensus on particular questions will shift over time. But the consensus of evidence-based science is more reliable than anything else we have, including the hunches of precaution. More science, done openly by skeptics and enthusiasts, will enable us to sooner say: “This is okay to use” or “This is not okay to use.” Once a consensus forms, we can regulate reasonably, as we have with lead in gasoline, tobacco, seat belts, and many other mandated improvements in society.
But in the meantime we should count on uncertainty. Even though we've learned to expect unintended consequences from every innovation, the particular unintended consequences are rarely foreseen. “Technology always does more than we intend; we know this so well that it has actually become part of our intentions,” writes Langdon Winner. “Imagine a world in which technologies accomplish only the specific purposes one had in mind in advance and nothing more. It would be a radically constricted world and one totally unlike the world we now inhabit.” We know technology will produce problems; we just don't know which new problems.
Because of the inherent uncertainties in any model, laboratory, simulation, or test, the only reliable way to assess a new technology is to let it run in place. An idea has to inhabit its new form sufficiently so that it can begin to express secondary effects. When a technology is tested soon after its birth, only its primary effects will be visible. But in most cases it is technology's unintended second-order effects that are the root of subsequent problems.
Second-order effects, the ones that usually overtake society, are rarely captured by forecasts, lab experiments, or white papers. Science-fiction guru Isaac Asimov made the astute observation that in the age of horses many ordinary people eagerly and easily imagined a horseless carriage. The automobile was an obvious anticipation since it was an extension of the first-order dynamics of a cart: a vehicle that goes forward by itself. An automobile would do everything a horse-pulled carriage did but without the horse. But Asimov went on to remark how difficult it was to imagine the second-order consequences of a horseless carriage, such as drive-in movie theaters, paralyzing traffic jams, and road rage.
Second-order effects often require a certain density, a semi-ubiquity, to reveal themselves. The main safety concern with the first automobiles centered on the safety of their occupants: the worry that the gas engines would blow up or that the brakes would fail. But the real challenge of autos emerged only in aggregate, when there were hundreds of thousands of cars: the accumulated exposure to their minute pollutants and their ability to kill others outside the car at high speeds, not to mention the disruptions of suburbs and long commutes, all second-order effects.
A common source of unforecastable effects of technologies stems from the way they interact with other technologies. In a 2005 debriefing that analyzed why the now-defunct U.S. Office of Technology Assessment, which existed from 1972 to 1995, did not have more of an impact in assessing upcoming technology, the researchers concluded:
While plausible (although always uncertain) forecasts can be generated for very specific and fairly evolved technologies (e.g., the supersonic transport; a nuclear reactor; a particular pharmaceutical product), the radical transforming capacity of technology comes not from individual artifacts but from interacting subsets of technologies that permeate society.
In short, crucial second-order effects are absent from small, precise experiments and sincere simulations of new technologies, and so an emerging technology must be tested in action and evaluated in real time. In other words, the risks of a particular technology have to be determined by trial and error in real life.
The appropriate response to a new idea should be to immediately try it out. And to keep trying it out, and testing it, as long as it exists. In fact, contrary to the Precautionary Principle, a technology can never be declared “proven safe.” It must be continuously tested with constant vigilance since it is constantly being reengineered by users and the coevolutionary technium it inhabits.
Technological systems “require continued attention, rebuilding and repair. Eternal vigilance is the price of artificial complexity,” says Langdon Winner. Stewart Brand elevates constant assessment to the level of the vigilance principle in his book on ecopragmatism, Whole Earth Discipline: “The emphasis of the vigilance principle is on liberty, the freedom to try things. The correction for emergent problems is in ceaseless, fine-grained monitoring.” He then suggests three categories to which we might assign a probationary technology: “1) provisionally unsafe until proven unsafe; 2) provisionally safe until proven safe; 3) provisionally beneficial until proven beneficial.” Provisional is the operative word. Another term for Brand's approach might be eternally provisional.
In his book about unintended consequences of technology, Why Things Bite Back, Edward Tenner spells out the nature of constant vigilance:
Technological optimism means in practice the ability to recognize bad surprises early enough to do something about them. . . . It also requires a second level of vigilance at increasingly porous national borders against the world exchange of problems. But vigilance does not end there. It is everywhere. It is in the random alertness tests that have replaced the “dead man's pedal” for train operators. It is in the rituals of computer backup, the legally mandated testing of everything from elevators to home smoke alarms, routine X-ray screening, securing and loading new computer-virus definitions. It is in the inspection of arriving travelers for products that might harbor pests. Even our alertness in crossing the street, second nature to urbanites now, was generally unnecessary before the eighteenth century. Sometimes vigilance is more of a reassuring ritual than a practical precaution, but with any luck it works.
The Amish practice something very similar. Their approach to the technium is founded on their very fundamental religious faith; their theology drives their technology. Yet paradoxically, the Amish are far more scientific than most secular professionals about which technology they adopt. Typical nonreligious consumers tend to accept technology “on faith” based on what the media says, with no testing at all. In contrast, the Amish perform four levels of empirical testing on a potential technology. Instead of hypothetical worst-case-scenario precaution, the Amish employ evidence-based technological assessment.
First, they discuss among themselves (sometimes in councils of their elders) the expected community consequences of an upcoming innovation. What happens if farmer Miller starts using solar panels to pump water? Once he has the panels, will he be tempted to use the electricity to run his refrigerators? What then? And where do the panels come from? In short, the Amish develop a hypothesis of the technology's impact. Second, they closely monitor the actual effect of use among a small set of early adopters to see if their observations confirm their hypothesis. How do the Miller family and their interactions with neighbors change as they use the new stuff? Third, the elders will remove a technology if it appears to be undesirable based on observed effects, and then assess the impact of its removal to further confirm their hypothesis. Was the community as a whole any better off without this technology? Last, they constantly reevaluate. Today, after 100 years of debate and observation, their communities are still discussing the merits of automobiles, electrification, and phones. None of this is quantitative; the results are compressed into anecdotes. Stories about what happened to so-and-so with such-and-such technology are retold in gossip or printed in the pages of their newsletters and become the currency of this empirical testing.
Technologies are nearly living things. Like all evolving entities, they must be tested in action, by action. The only way to wisely evaluate our technological creations is to try them out in prototypes, then refine them in pilot programs. In living with them we can adjust our expectations, shift, test, and rerelease. In action we monitor alterations, then redefine our aims. Eventually, by living with what we create, we can redirect technologies to new jobs when we are not happy with their outcomes. We move with them instead of against them.
The principle of constant engagement is called the Proactionary Principle. Because it emphasizes provisional assessment and constant correction, it is a deliberate counterapproach to the Precautionary Principle. This framework was first articulated by the radical transhumanist Max More in 2004. More began with ten guidelines, but I have reduced them to five proactions. Each proaction is a heuristic to guide us in assessing new technologies.
The five proactions are:
1. Anticipation
Anticipation is good. All tools of anticipation are valid. The more techniques we use, the better, because different techniques fit different technologies. Scenarios, forecasts, and outright science fiction give partial pictures, which is the best we can expect. Objective scientific measurement of models, simulations, and controlled experiments should carry greater weight, but these, too, are only partial. Actual early data should trump speculation. The anticipation process should try to imagine as many horrors as glories, as many glories as horrors, and, if possible, to anticipate ubiquity: What happens if everyone has this for free? Anticipation should not be a judgment. The purpose of anticipation is not to accurately predict what will happen with a technology, because all precise predictions are wrong, but to prepare a base for the next four steps. It is a way to rehearse future actions.
2. Continual Assessment
Or eternal vigilance. We have increasing means to quantifiably test everything we use all the time, not just once. By means of embedded technology we can turn daily use of technologies into large-scale experiments. No matter how much a new technology is tested at first, it should be continuously retested in real time. Technology provides us with more precise means of niche testing. Using communication technology, cheap genetic testing, and self-tracking tools, we can focus on how innovations fare in specific neighborhoods, subcultures, gene pools, ethnic groups, or user modes. Testing can also be continual, 24/7, rather than just on first release. Further, new technology such as social media (today's Facebook) allows citizens to organize their own assessments and do their own sociological surveys. Testing is active and not passive. Constant vigilance is baked into the system.
3. Prioritization of Risks, Including Natural Ones
Risks are real but endless. Not all risks are equal. They must be weighted and prioritized. Known and proven threats to human and environmental health are given precedence over hypothetical risks. Furthermore, the risks of inaction and the risks of natural systems must be treated symmetrically. In Max More's words: “Treat technological risks on the same basis as natural risks; avoid underweighting natural risks and overweighting human-technological risks.”
4. Rapid Correction of Harm
When things go wrong, and they always will, harm should be remedied quickly and compensated in proportion to actual damages. The assumption that any given technology will create problems should be part of its process of creation. The software industry may offer a model for quick correction: Bugs are expected; they are not a reason to kill a product; instead they are employed to better the technology. Think of unintended consequences in other technologies, even fatal ones, as bugs that need to be corrected. The more sentient the technology, the easier it is to correct. Rapid restitution for harm done (which the software industry does not do) would also indirectly aid the adoption of future technologies. But restitution should be fair. Penalizing creators for hypothetical harm or even potential harm demeans justice and weakens the system, reducing honesty and penalizing those who act in good faith.
5. Not Prohibition but Redirection
Prohibition and relinquishment of dubious technologies do not work. Instead, find them new jobs. A technology can play different roles in society. It can have more than one expression. It can be set with different defaults. It can have more than one political cast. Since banning fails, redirect technologies into more convivial forms.
To return to the question at the beginning of this chapter: What choices do we have in steering the inevitable progress of the technium?
We have the choice of how we treat our creations, where we place them, and how we train them with our values. The most helpful metaphor for understanding technology may be to consider humans as the parents of our technological children. As we do with our biological children, we can, and should, constantly hunt for the right mix of beneficial technological “friends” to cultivate our technological offspring's best side. We can't really change the nature of our children, but we can steer them to tasks and duties that match their talents.