Take photography. If the processing of color photography is centralized (as it was for 50 years by Kodak), that applies a different tenor to photography than if the processing is done by chips in the camera itself. Centralization fosters a type of self-censorship of what pictures you take, and it also adds a time lag for displaying the results, which slows learning and discourages spontaneity. To be able to take a colorful picture of anything and then review it instantly and cheaply: that changed the character of the same glass lenses and shutter.

Another example: It is easy to inspect the components in a motor, but not in a can of paint. But chemical products could be made to reveal their component ingredients with extra information, as if they were motor parts; the labeling could trace their manufacturing process back to their source as pigments in the Earth or in oil and thus make them more transparent to control and to interaction. This more open expression of paint technology would be different, and maybe more useful.

Final example: Radio broadcasting, a very old and easily manufactured technology, is currently among the most heavily regulated technologies in most countries. This steep regulation by government has led to the development of only a few bands of frequencies out of all those available, most of which remain underused. In an alternative system, radio spectrum could be allotted in a very different manner, potentially giving rise to cell phones that communicate directly with one another instead of through a local hub cell tower. The resulting alternative peer-to-peer broadcast system would yield a vastly different expression of radio.
Oftentimes the first job we assign to a technology is not at all ideal. For instance, DDT was an ecological disaster when assigned as an aerially sprayed insecticide on cotton crops. But restricted to the task of a household malaria remedy, it shines as a public-health hero. Same technology, better job. It may take many tries, many jobs, many mistakes before we find a great role for a given technology.
The more autonomy our children (technological as well as biological) have, the more freedom they have to make mistakes. Our children's ability to create a disaster (or create a masterpiece) may even exceed our own, which is why parenting is both the most frustrating and the most rewarding thing we can do. By this measure our scariest offspring are forms of self-duplicating technology that already have significant potential autonomy. No creation of ours will test our patience and love as much as these will. And no technologies will test our ability to influence, steer, or guide the technium in the future as these will.
Self-duplication is old news in biology. It's the four-billion-year-old magic that allows nature to replenish herself, as one chicken hatches another chicken and so on. But self-duplication is a radical new force in the technium. The mechanical ability to make perfect copies of oneself, and then occasionally to create an improvement before copying, unleashes a type of independence that is not easily controlled by humans. Endless, ever-quickening cycles of reproduction, mutation, and bootstrapping can send a technological system into overdrive, leaving the rider far behind. As they zoom ahead, these technological creations will make new mistakes. Their unforeseeable achievements will amaze and terrify us.
The power of self-replication is now found in four fields of high technology: geno, robo, info, and nano. Geno stuff includes gene therapies, genetically modified organisms, synthetic life, and drastic genetic engineering of the human line. With genotechnology a new critter or new chromosome can be invented and released; it then reproduces forever, in theory.
Robo stuff is, of course, robots. Robots already work in factories making other robots, and at least one university lab has prototyped an autonomously self-assembling machine. Give this machine a pile of parts and it will assemble a copy of itself.
Info stuff is self-replicants such as computer viruses, artificial minds, and virtual personae built through data accumulation. Computer viruses have famously already mastered self-reproduction. Thousands infect hundreds of millions of computers. The holy grail of research into artificial learning and intelligence is, of course, to make an artificial mind smart enough to make another artificial mind smarter still.
Nano stuff is extremely tiny machines (as small as bacteria) that are designed for chores like eating oil or performing calculations or cleaning human arteries. Because they are so small, these tiny machines can work like mechanical computer circuits, and so in theory, they can be designed to self-assemble and reproduce like other computational programs. They would be a sort of dry life, although this is many years away.
In these four areas the self-amplifying loops of self-duplication catapult the effects of these technologies into the future very quickly. Robots that make robots that make robots! Their accelerated cycles of creation can race far ahead of our intentions, which is worrisome. Who's controlling the robo descendants?
In the geno world, if we code changes into a gene line, for example, those changes can replicate down generations forever. And not just in family lines. Genes can easily migrate horizontally between species. So copies of new genes, bad or good, might disseminate through both time and space. As we know from the digital era, once copies are released they are hard to take back. If we can engineer an endless cascade of artificial minds inventing minds smarter than themselves (and us), what control do we have over the moral judgment of such creations? What if they start out with harmful prejudices?
Information shares this same avalanching property of replicating out of our control. Computer security experts claim that of the thousands of species of self-replicating worms and computer viruses invented by hackers to date, not one has gone extinct. They are here forever, or as long as two machines still run.
Finally, nanotechnology promises marvelous super-micro-thingies that are constructed at the precision of single atoms. The threat of these nano-organisms breeding without limit until they cover everything is known as the “gray goo” scenario. For a number of reasons, I think the gray goo scenario is scientifically unlikely, though some kind of self-reproducing nanostuff is inevitable. But it is very likely that at least a few fragile species of nanotechnology (not goo) will breed in the wild, in narrow, protected niches. Once a nanobug goes feral, it could be indelible.
As the technium gains in complexity, it will gain in autonomy. What the current crop of self-duplicating GRIN (geno, robo, info, nano) technologies reveals is the way in which this rising autonomy demands our attention and respect. In addition to all the usual difficulties that new technologies present (shifting capabilities, unintended roles, hidden consequences), self-replicating technologies add two more: amplification and acceleration. Tiny effects rapidly escalate into major upheaval as one generation amplifies another, in the same way an innocent whisper of feedback in a microphone can burst into a deafening screech. And by the same cycles of self-generation, the speed at which a replicating technology impacts the technium keeps accelerating. The effects are pushed so far downstream that they complicate our ability to proactively engage with, test, and try out the technology in the present.
This is a replay of an old story. The amazing, uplifting power of life itself is rooted in its ability to leverage self-replication, and now that power is being born in technology. The most powerful force in the world will become much more powerful as it gains the ability to self-generate, but this liquid dynamite presents a grand challenge: how to manage it.
A common reaction to the out-of-control nature of geno-, robo-, info-, and nanotechnology is to call for a moratorium on their development. Ban them. In 2000 Bill Joy, the pioneering computer scientist who wrote key software that the internet runs on, called upon his fellow scientists in genetic, robotic, and computer sciences to relinquish GRIN technologies that could potentially be weaponized, to give them up the way we gave up biological weapons. Under the guidance of the Precautionary Principle, the Canadian watchdog group ETC called for a moratorium on all nanotechnological research. The German equivalent of the EPA demanded a ban on products containing silver nanoparticles (used in antimicrobial coatings). Others would like to ban autopiloted automobiles from public roads, outlaw genetically engineered vaccines in children, or halt human gene therapy until each invention can be proven to cause no harm.
This is exactly the wrong thing to do. These technologies are inevitable. And they will cause some degree of harm. After all, to point to only one example above, human-piloted cars cause great harm, killing more than a million people each year worldwide. If robot-controlled cars killed “only” half a million people per year, it would be an improvement!
Yet their most important consequences, both positive and negative, won't be visible for generations. We don't have a choice in whether there will be genetically engineered crops everywhere. There will be. We do have a choice in the character of the genetic food system: whether its innovations are publicly or privately held, whether it is regulated by government or industry, whether we engineer it for generational use or only the next business quarter. As inexpensive communication systems circle the globe, they knit a thin cloak of nervous material around the planet, making an electronic “world brain” of some kind inevitable. But the full downsides, or upsides, of this world brain won't be measurable until it is operating. The choice for humans is: What kind of world brain would we like to make out of this envelope? Is the participation default open or closed? Is it easy to modify and share, or is modification difficult and burdensome? Are the controls proprietary? Is it easy to hide from? The details of the web can go in a hundred different ways, although the technologies themselves will bias us in certain directions. Yet how we express the inevitable global web is a significant choice we own. We can only shape technology's expression by engaging with it, by riding it with both arms around its neck.
To do that means to embrace those technologies now. To create them, turn them on, try them. This is the opposite of a moratorium. It's more like a try-atorium. The result would be a conversation, a deliberate engagement with the emerging technology. The faster these technologies spin into the future, the more essential it is that we ride them from the start.
Cloning, nanotech, network bots, and artificial intelligence (to name just a few GRIN examples) need to be released within our embrace. Then we'll bend each one this way and that. A better metaphor would be that we'll train the technology. As in the best animal and child training, positive aspects are reinforced with resources and negative aspects are starved until they diminish.
In one sense, self-amplifying GRIN-ologies are bullies, rogue technologies. They will need our utmost attention in order to be trained for consistent goodness. We need to invent appropriate long-term training technologies to guide them across the generations. The worst thing to do is banish and isolate them. Rather, we want to work with the bullying problem child. High-risk technologies need more chances for us to discover their true strengths. They need more of our investment and more opportunities to be tried. Prohibiting them only drives them underground, where their worst traits are emphasized.
There are already a few experiments to embed guiding heuristics in artificially intelligent systems as a means to make “moral” artificial intelligence, and other experiments to embed long-range control systems in genetic and nanosystems. We have an existing proof that such embedded principles workâin ourselves. If we can train our childrenâwho are the ultimate power-hungry, autonomous, generational rogue beingsâto be better than us, then we can train our GRINs.
As in raising our children, the real questionâand disagreementâlies in what values we want to transmit over generations. This is worth discussing, and I suspect that, as in real life, we won't all agree on the answers.
The message of the technium is that any choice is way better than no choice. That's why technology tends to tip the scales slightly toward the good, even though it produces so many problems. Let's say we invent a hypothetical new technology that can give immortality to 100 people, but at the cost of killing 1 other person prematurely. We could argue about what the real numbers would have to be to “balance out” (maybe it is 1,000 who never die, or a million, for one who does), but this bookkeeping ignores a critical fact: Because this life-extension technology now exists, there is a new choice between 1 dead and 100 immortal that did not exist before. This additional possibility or freedom or choice, between immortality and death, is good in itself. So even if the result of this particular moral choice (100 immortal = 1 dead) is deemed a wash, the choice itself tips the balance a few percentage points to the good side. Multiply this tiny lean toward good by each of the million, 10 million, or 100 million inventions birthed in technology each year, and you can see why the technium tends to amplify the good slightly more than the evil. It compounds the good in the world because, in addition to the direct good it brings, the arc of the technium keeps increasing choices, possibilities, freedom, and free will in the world, and that is an even greater good.
In the end, technology is a type of thinking; a technology is a thought expressed. Not all thoughts or technologies are equal. Clearly, there are silly theories, wrong answers, and dumb ideas. While a military laser and Gandhi's act of civil disobedience are both useful works of human imagination and thus both technological, there is a difference between the two. Some possibilities restrict future choices, and some possibilities are pregnant with other possibilities.
However, the proper response to a lousy idea is not to stop thinking. It is to come up with a better idea. Indeed, we should prefer a bad idea to no ideas at all, because a bad idea can at least be reformed, while not thinking offers no hope.