These technologies will emerge gradually (I will attempt to delineate the different gradations of nanotechnology as I talk about each of the decades of the twenty-first century in Part III of this book). There is a clear incentive to go down this path. Given a choice, people will prefer to keep their bones from crumbling, their skin supple, their life systems strong and vital. Improving our lives through neural implants on the mental level, and nanotechnology-enhanced bodies on the physical level, will be popular and compelling. It is another one of those slippery slopes—there is no obvious place to stop this progression until the human race has largely replaced the brains and bodies that evolution first provided.
A Clear and Future Danger
Without self-replication, nanotechnology is neither practical nor economically feasible. And therein lies the rub. What happens if a little software problem (inadvertent or otherwise) fails to halt the self-replication? We may have more nanobots than we want. They could eat up everything in sight.
The movie The Blob (of which there are two versions) was a vision of nanotechnology run amok. The movie's villain was this intelligent, self-replicating, gluttonous stuff that fed on organic matter. Recall that nanotechnology is likely to be built from carbon-based nanotubes, so, like the Blob, it will build itself from organic matter, which is rich in carbon. Unlike mere animal-based cancers, an exponentially exploding nanomachine population would feed on any carbon-based matter. Tracking down all of these bad nanointelligences would be like trying to find trillions of microscopic needles—rapidly moving ones at that—in at least as many haystacks. There have been proposals for nanoscale immunity technologies: good little antibody machines that would go after the bad little machines. The nanoantibodies would, of course, have to scale up at least as quickly as the epidemic of marauding nanomiscreants. There could be a lot of collateral damage as these trillions of machines battle it out.
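The arithmetic behind this fear is simple exponential doubling. As a minimal sketch (assuming, purely for illustration, that each escaped nanobot copies itself once per replication cycle):

```python
# Illustrative only: growth of an unchecked self-replicating population,
# assuming each nanobot makes one copy of itself per cycle (doubling).

def generations_to_reach(target, start=1):
    """Number of doubling cycles for a population to reach `target`."""
    population, cycles = start, 0
    while population < target:
        population *= 2
        cycles += 1
    return cycles

# A trillion replicators from a single escaped seed:
print(generations_to_reach(10**12))  # 40 doublings
```

Forty cycles from one seed to a trillion copies is why the defensive nanoantibodies would have to scale up at least as fast as the outbreak they are chasing.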
Now that I have raised this specter, I will try, unconvincingly perhaps, to put the peril in perspective. I believe that it will be possible to engineer self-replicating nanobots in such a way that an
inadvertent,
undesired population explosion would be unlikely. I realize that this may not be completely reassuring, coming from a software developer whose products (like those of my competitors) crash once in a while (but rarely—and when they do, it’s the fault of the operating system!). There is a concept in software development of “mission critical” applications. These are software programs that control a process on which people are heavily dependent. Examples of mission-critical software include life-support systems in hospitals, automated surgical equipment, autopilot flying and landing systems, and other software-based systems that affect the well-being of a person or organization. It is feasible to create extremely high levels of reliability in these programs. There are examples of complex technology in use today in which a mishap would severely imperil public safety. A conventional explosion in an atomic power plant could spray deadly plutonium across heavily populated areas. Despite a near meltdown at Chernobyl, this apparently has only occurred twice in the decades that we have had hundreds of such plants operating, both incidents involving recently acknowledged reactor calamities in the Chelyabinsk region of Russia.
15
There are tens of thousands of nuclear weapons, and none has ever exploded in error.
I admit that the above paragraph is not entirely convincing. But the bigger danger is the intentional hostile use of nanotechnology. Once the basic technology is available, it would not be difficult to adapt it as an instrument of war or terrorism. It is not the case that someone would have to be suicidal to use such weapons. The nanoweapons could easily be programmed to replicate only against an enemy; for example, only in a particular geographical area. Nuclear weapons, for all their destructive potential, are at least relatively local in their effects. The self-replicating nature of nanotechnology makes it a far greater danger.
VIRTUAL BODIES
We don’t always need real bodies. If we happen to be in a virtual environment, then a virtual body will do just fine. Virtual reality started with the concept of computer games, particularly ones that provided a simulated environment. The first was Space War, written by early artificial-intelligence researchers to pass the time while waiting for programs to compile on their slow 1960s computers.16
The synthetic space surroundings were easy to render on low-resolution monitors: Stars and other space objects were just illuminated pixels.
Computer games and computerized video games have become more realistic over time, but you cannot completely immerse yourself in these imagined worlds, not without some imagination. For one thing, you can see the edges of the screen, and the all too real world that you have never left is still visible beyond these borders.
If we’re going to enter a new world, we had better get rid of traces of the old. In the 1990s the first generation of virtual reality was introduced, in which you don a special visual helmet that takes over your entire visual field. The key to virtual reality is that when you move your head, the scene instantly repositions itself so that you are now looking at a different region of a three-dimensional scene. The intention is to simulate what happens when you turn your real head in the real world: The images captured by your retinas rapidly change. Your brain nonetheless understands that the world has remained stationary and that the image is sliding across your retinas only because your head is rotating.
Like most first generation technologies, virtual reality has not been fully convincing. Because rendering a new scene requires a lot of computation, there is a lag in producing the new perspective. Any noticeable delay tips off your brain that the world you’re looking at is not entirely real. The resolution of virtual reality displays has also been inadequate to create a fully satisfactory illusion. Finally, contemporary virtual reality helmets are bulky and uncomfortable.
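The lag problem can be made concrete with a back-of-the-envelope latency budget. The stage timings below are hypothetical round numbers for illustration, not measurements of any actual headset:

```python
# Illustrative motion-to-photon latency budget for a head-mounted display.
# Each stage's timing is a hypothetical round number, not a measurement.

stages_ms = {
    "head tracker sampling": 4.0,
    "scene rendering": 16.7,   # one frame at 60 Hz
    "display refresh": 8.0,
}

total = sum(stages_ms.values())
print(f"total latency: {total:.1f} ms")  # total latency: 28.7 ms

# A common rule of thumb is that lag much above ~20 ms becomes
# perceptible; a pipeline like this one would tip off the brain.
print("noticeable lag" if total > 20.0 else "imperceptible")
```

Shaving the total under the perceptual threshold requires speeding up every stage at once, which is why faster computers are the prerequisite.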
What’s needed to remove the rendering delay and to boost display resolution is yet faster computers, which we know are always on the way. By 2007, high-quality virtual reality with convincing artificial environments, virtually instantaneous rendering, and high-definition displays will be comfortable to wear and available at computer game prices.
That takes care of two of our senses—visual and auditory. Another high-resolution sense organ is our skin, and “haptic” interfaces to provide a virtual tactile interface are also evolving. One available today is the Microsoft force-feedback joystick, derived from 1980s research at the MIT Media Lab. A force-feedback joystick adds some tactile realism to computer games, so you feel the rumble of the road in a car-driving game or the pull of the line in a fishing simulation. Emerging in late 1998 is the “tactile mouse,” which operates like a conventional mouse but allows the user to feel the texture of surfaces, objects, even people. One company that I am involved in, Medical Learning Company, is developing a simulated patient to help train doctors, as well as enable nonphysicians to play doctor. It will include a haptic interface so that you can feel a knee joint for a fracture or a breast for lumps.17
A force-feedback joystick in the tactile domain is comparable to conventional monitors in the visual domain. The force-feedback joystick provides a tactile interface, but it does not totally envelop you. The rest of your tactile world is still reminding you of its presence. In order to leave the real world, at least temporarily, we need a tactile environment that takes over your sense of touch.
So let’s invent a virtual tactile environment. We’ve seen aspects of it in science fiction films (always a good source for inventing the future). We can build a body suit that will detect your own movements as well as provide high-resolution tactile stimulation. The suit will also need to provide sufficient force-feedback to actually prevent your movements if you are pressing against a virtual obstacle in the virtual environment. If you are giving a virtual companion a hug, for example, you don’t want to move right through his or her body. This will require a force-feedback structure outside the suit, although obstacle resistance could be provided by the suit itself. And since your body inside the suit is still in the real world, it would make sense to put the whole contraption in a booth so that your movements in the virtual world don’t knock down lamps and people in your “real” vicinity. Such a suit could also provide a thermal response and thereby allow the simulation of feeling a moist surface—or even immersing your hand or your whole body in water—which is indicated by a change in temperature and a decrease in surface tension. Finally, we can provide a platform consisting of a rotating treadmill device for you to stand (or sit or lie) on, which will allow you to walk or move around (in any direction) in your virtual environment.
So with the suit, the outer structure, the booth, the platform, the goggles, and the earphones, we just about have the means to totally envelop your senses. Of course, we will need some good virtual reality software, but there’s certain to be hot competition to provide a panoply of realistic and fantastic new environments as the requisite hardware becomes available.
Oh yes, there is the sense of smell. A completely flexible and general interface for our fourth sense will require a reasonably advanced nanotechnology to synthesize the wide variety of molecules that we can detect with our olfactory sense. In the meantime, we could provide the ability to diffuse a variety of aromas in the virtual reality booth.
Once we are in a virtual reality environment, our own bodies—at least the virtual versions—can change as well. We can become a more attractive version of ourselves, a hideous beast, or any creature real or imagined as we interact with the other inhabitants in each virtual world we enter.
Virtual reality is not a (virtual) place you need go to alone. You can interact with your friends there (who would be in other virtual reality booths, which may be geographically remote). You will have plenty of simulated companions to choose from as well.
Directly Plugging In
Later in the twenty-first century, as neural implant technologies become ubiquitous, we will be able to create and interact with virtual environments without having to enter a virtual reality booth. Your neural implants will provide the simulated sensory inputs of the virtual environment—and your virtual body—directly in your brain. Conversely, your movements would not move your “real” body, but rather your perceived virtual body. These virtual environments would also include a suitable selection of bodies for yourself. Ultimately, your experience would be highly realistic, just like being in the real world. More than one person could enter a virtual environment and interact with each other. In the virtual world, you will meet other real people and simulated people—eventually, there won’t be much difference.
This will be the essence of the Web in the second half of the twenty-first century. A typical “web site” will be a perceived virtual environment, with no external hardware required. You “go there” by mentally selecting the site and then entering that world. Debate Benjamin Franklin on the war powers of the presidency at the history society site. Ski the Alps at the Swiss Chamber of Commerce site (while feeling the cold spray of snow on your face). Hug your favorite movie star at the Columbia Pictures site. Get a little more intimate at the Penthouse or Playgirl site. Of course, there may be a small charge.
Real Virtual Reality
In the late twenty-first century, the “real” world will take on many of the characteristics of the virtual world through the means of nanotechnology “swarms.” Consider, for example, Rutgers University computer scientist J. Storrs Hall’s concept of “Utility Fog.”18
Hall’s conception starts with a little robot called a Foglet, which consists of a human-cell-sized device with twelve arms pointing in all directions. At the end of the arms are grippers so that the Foglets can grasp one another to form larger structures. These nanobots are intelligent and can merge their computational capacities with each other to create a distributed intelligence. A space filled with Foglets is called Utility Fog and has some interesting properties.
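A toy model makes the grasping mechanics concrete. This sketch assumes only what Hall's description gives us (twelve arms per Foglet, with grippers that link neighboring units); the class and method names are illustrative, not Hall's:

```python
# Toy model of Hall's Foglets: each unit has twelve arms whose grippers
# can link to neighbors, building up larger rigid structures.

class Foglet:
    MAX_ARMS = 12  # per Hall's description

    def __init__(self, ident):
        self.ident = ident
        self.neighbors = set()

    def grasp(self, other):
        """Link two Foglets if both have a free arm; return success."""
        if (len(self.neighbors) < Foglet.MAX_ARMS
                and len(other.neighbors) < Foglet.MAX_ARMS):
            self.neighbors.add(other.ident)
            other.neighbors.add(self.ident)
            return True
        return False

# Three Foglets gripping one another into a small triangle:
a, b, c = Foglet("a"), Foglet("b"), Foglet("c")
a.grasp(b); b.grasp(c); c.grasp(a)
print(sorted(a.neighbors))  # ['b', 'c']
```

Scaled to trillions of units, the same local grip-your-neighbor rule is what lets the Fog assemble walls, furniture, or "ancient Rome one day and Emerald City the next."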
First of all, the Utility Fog goes to a lot of trouble to simulate its not being there. Hall describes a detailed scenario that lets a real human walk through a room filled with trillions of Foglets and not notice a thing. When desired (and it’s not entirely clear who is doing the desiring), the Foglets can quickly simulate any environment by creating all sorts of structures. As Hall puts it, “Fog city can look like a park, or a forest, or ancient Rome one day and Emerald City the next.”
The Foglets can create arbitrary wave fronts of light and sound in any direction to create any imaginary visual and auditory environment. They can exert any pattern of pressure to create any tactile environment. In this way, Utility Fog has all the flexibility of a virtual environment, except it exists in the real physical world. The distributed intelligence of the Utility Fog can simulate the minds of scanned (Hall calls them “uploaded”) people who are re-created in the Utility Fog as “Fog people.” In Hall’s scenario, “a biological human can walk through Fog walls, and a Fog (uploaded) human can walk through dumb-matter walls. Of course Fog people can walk through Fog walls, too.”
The physical technology of Utility Fog is actually rather conservative. The Foglets are much bigger machines than most nanotechnology conceptions. The software is more challenging, but ultimately feasible. Hall needs a bit of work on his marketing angle: Utility Fog is a rather dull name for such versatile stuff.