Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100

There are several ways in which scientists are making this a reality. The first is to create a machine that can convert the spoken word into writing. In the mid-1990s, the first commercially available speech recognition machines hit the market. They could recognize up to 40,000 words with 95 percent accuracy. Since a typical, everyday conversation uses only 500 to 1,000 words, these machines are more than adequate. Once the transcription of the human voice is accomplished, each word is translated into another language via a computer dictionary. Then comes the hard part: putting the words into context, handling slang, colloquial expressions, and the like, all of which requires a sophisticated understanding of the nuances of the language. The field is called CAT (computer-assisted translation).
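The naive dictionary stage described above can be sketched in a few lines of Python. This is only an illustration; the tiny dictionary and the function name are invented for the example:

```python
# Toy word-for-word substitution -- the simple dictionary stage
# described above, before any handling of context or idiom.
en_to_es = {"the": "el", "cat": "gato", "sleeps": "duerme"}

def naive_translate(sentence: str) -> str:
    # Words missing from the dictionary pass through unchanged;
    # grammar, word order, and slang are not handled at all --
    # exactly the "hard part" that remains.
    return " ".join(en_to_es.get(w, w) for w in sentence.lower().split())

print(naive_translate("The cat sleeps"))  # -> el gato duerme
```

Even this trivial sketch shows why the hard part is hard: a real system must pick among several dictionary senses of each word and rearrange word order, which is what makes CAT a sophisticated problem.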

Another way is being pioneered at Carnegie Mellon University in Pittsburgh. Scientists there already have prototypes that can translate Chinese into English, and English into Spanish or German. They attach electrodes to the neck and face of the speaker; these pick up the contraction of the muscles and decipher the words being spoken. Their work does not require any audio equipment, since the words can be mouthed silently. Then a computer translates these words and a voice synthesizer speaks them out loud. In simple conversations involving 100 to 200 words, they have attained 80 percent accuracy.

“The idea is that you can mouth words in English and they will come out in Chinese or another language,” says Tanja Schultz, one of the researchers. In the future, it might be possible for a computer to lip-read the person you are talking to, making the electrodes unnecessary. So, in principle, two people could carry on a lively conversation even though they speak two different languages.

In the future, language barriers, which once tragically prevented cultures from understanding one another, may gradually fall with this universal translator and Internet contact lens or glasses.

Although augmented reality opens up an entirely new world, there are limitations. The problem will not be one of hardware; nor is bandwidth a limiting factor, since there is practically no limit to the amount of information that can be carried by fiber-optic cables.

The real bottleneck is software. Creating software can be done only the old-fashioned way. A human—sitting quietly in a chair with a pencil, paper, and laptop—is going to have to write the code, line by line, that makes these imaginary worlds come to life. One can mass-produce hardware and increase its power by piling on more and more chips, but you cannot mass-produce the brain. This means that the introduction of a truly augmented world will take decades, until midcentury.

HOLOGRAMS AND 3-D

Another technological advance we might see by midcentury is true 3-D TV and movies. Back in the 1950s, 3-D movies required that you put on clunky glasses whose lenses were colored blue and red. This took advantage of the fact that the left eye and the right eye view the world from slightly different positions; the movie screen displayed two overlapping images, one blue and one red. Since these glasses acted as filters that delivered a distinct image to each eye, the brain merged the two images and produced the illusion of seeing three dimensions. Depth perception, therefore, was a trick. (The farther apart your eyes are, the greater the depth perception. That is why some animals have eyes set wide apart on their heads: to give them maximum depth perception.)
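The blue-and-red trick is easy to reproduce in software. Here is a minimal sketch, assuming the two views are supplied as NumPy RGB arrays; the function name is invented for the example:

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine two views of a scene into one red-blue anaglyph.

    left and right are HxWx3 uint8 RGB arrays photographed from
    slightly offset viewpoints. Red-blue glasses then filter the
    red channel to one eye and the blue channel to the other, so
    each eye receives its own image and the brain fuses them.
    """
    out = np.zeros_like(left)
    out[..., 0] = left[..., 0]   # red channel carries the left-eye view
    out[..., 2] = right[..., 2]  # blue channel carries the right-eye view
    return out

# Tiny synthetic example: a bright "left" view and a dark "right" view.
left = np.full((2, 2, 3), 200, dtype=np.uint8)
right = np.full((2, 2, 3), 50, dtype=np.uint8)
ana = make_anaglyph(left, right)
```

The green channel is simply discarded, which is why classic anaglyphs cannot show full color — the limitation the polarized-glasses approach below overcomes.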

One improvement is to have 3-D glasses made of polarized glass, so that the left eye and right eye are shown two different polarized images. In this way, one can see 3-D images in full color, not just in blue and red. Since light is a wave, it can vibrate up and down, or left and right. A polarized lens is a piece of glass that allows light vibrating in only one direction to pass through. Therefore, if you have two polarized lenses in your glasses, with different directions of polarization, you can create a 3-D effect. A more sophisticated version of 3-D may be to have two different images flashed onto our contact lenses.

3-D TVs that require wearing special glasses have already hit the market. But soon, 3-D TVs will no longer require them, instead using lenticular lenses. The TV screen is specially made so that it projects two separate images at slightly different angles, one for each eye. Hence your eyes see separate images, giving the illusion of 3-D. However, your head must be positioned correctly; there are “sweet spots” where your eyes must lie as you gaze at the screen. (This takes advantage of a well-known optical illusion. In novelty stores, we see pictures that magically transform as we walk past them. This is done by taking two pictures, shredding each one into many thin strips, and then interspersing the strips, creating a composite image. Then a lenticular glass sheet with many vertical grooves is placed on top of the composite, each groove sitting precisely on top of two strips. The groove is specially shaped so that, as you gaze upon it from one angle, you can see one strip, but the other strip appears from another angle. Hence, by walking past the glass sheet, we see each picture suddenly transform from one into the other, and back again. 3-D TVs will replace these still pictures with moving images to attain the same effect without the use of glasses.)
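The strip-interleaving step behind those novelty pictures can be sketched directly. This toy version assumes two equal-size grayscale images as NumPy arrays and takes alternate pixel columns from each; the function name is invented for the example:

```python
import numpy as np

def interleave_strips(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Build the composite image that sits under a lenticular sheet.

    Even-numbered pixel columns come from img_a and odd-numbered
    columns from img_b; the grooves of the lenticular sheet then
    reveal one set of strips or the other depending on the angle
    you view it from.
    """
    assert img_a.shape == img_b.shape
    out = img_a.copy()
    out[:, 1::2] = img_b[:, 1::2]  # odd columns taken from the second image
    return out

a = np.zeros((2, 4), dtype=np.uint8)       # an all-black picture
b = np.full((2, 4), 255, dtype=np.uint8)   # an all-white picture
composite = interleave_strips(a, b)
# each row alternates: 0, 255, 0, 255
```

A glasses-free 3-D TV applies the same idea with the two strips carrying the left-eye and right-eye views, refreshed thirty times a second — which is also why the viewer's head must sit in a "sweet spot."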

But the most advanced version of 3-D will be holograms. Without using any glasses, you would see the precise wave front of a 3-D image, as if it were sitting directly in front of you. Holograms have been around for decades (they appear in novelty shops, on credit cards, and at exhibitions), and they regularly are featured in science fiction movies. In Star Wars, the plot was set in motion by a 3-D holographic distress message sent from Princess Leia to members of the Rebel Alliance.

The problem is that holograms are very hard to create.

Holograms are made by taking a single laser beam and splitting it in two. One beam falls on the object you want to photograph, bounces off it, and lands on a special screen. The second laser beam falls directly onto the screen. The mixing of the two beams creates a complex interference pattern containing the “frozen” 3-D image of the original object, which is captured on a special film on the screen. Then, by flashing another laser beam through the screen, the image of the original object comes to life in full 3-D.
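The mixing of the two beams can be simulated in a few lines. This toy model treats the screen as a one-dimensional strip and the two beams as unit-amplitude plane waves; all the numbers are illustrative, not taken from a real setup:

```python
import numpy as np

# Two-beam interference on a 1-D "screen" -- a toy model of how a
# hologram's fringe pattern records phase, not just brightness.
wavelength = 0.5e-6                  # green light, in meters
k = 2 * np.pi / wavelength           # wave number
x = np.linspace(0, 50e-6, 1000)      # a 50-micrometer strip of screen

reference = np.exp(1j * k * x * np.sin(0.0))     # reference beam, head-on
obj = np.exp(1j * (k * x * np.sin(0.1) + 1.0))   # object beam: tilted, phase-shifted

intensity = np.abs(reference + obj) ** 2         # what the film records

# Two unit-amplitude beams interfere between fully destructive
# (intensity near 0) and fully constructive (intensity near 4),
# so the fringe spacing and offset encode the object beam's angle
# and phase -- the information a flat photograph throws away.
```

Shining the reference beam back through the developed fringes reconstructs the object beam's full wave front, which is why the replayed image is truly three-dimensional.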

There are two problems with holographic TV. First, the image has to be flashed onto a screen. Sitting in front of the screen, you see the exact 3-D image of the original object. But you cannot reach out and touch the object. The 3-D image you see in front of you is an illusion.

This means that if you are watching a 3-D football game on your holographic TV, no matter how you move, the image in front of you changes as if it were real. It might appear that you are sitting right at the 50-yard line, watching the game just inches from the football players. However, if you were to reach out to grab the ball, you would bump into the screen.

The real technical problem that has prevented the development of holographic TV is that of information storage. A true 3-D image contains a vast amount of information, many times the information stored inside a single 2-D image. Computers regularly process 2-D images, since the image is broken down into tiny dots, called pixels, and each pixel is illuminated by a tiny transistor. But to make a 3-D image move, you need to flash thirty images per second. A quick calculation shows that the information needed to generate moving 3-D holographic images far exceeds the capability of today’s Internet.
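That "quick calculation" can be made concrete under some loudly stated assumptions: a true hologram needs fringe-scale pixels comparable to the wavelength of visible light, so even a modest screen implies an enormous data rate. Every number below is an illustrative guess, not a measured figure:

```python
# Back-of-the-envelope data rate for a moving hologram.
# All values here are illustrative assumptions.
screen_side_m = 0.5        # assume a 50 cm x 50 cm screen
pixel_pitch_m = 0.5e-6     # fringe spacing ~ wavelength of visible light
frames_per_second = 30     # standard video frame rate
bits_per_pixel = 8         # assume 8-bit grayscale fringes

pixels = (screen_side_m / pixel_pitch_m) ** 2            # 1e12 pixels
bits_per_second = pixels * bits_per_pixel * frames_per_second

print(f"{bits_per_second:.1e} bits per second")          # ~2.4e14: hundreds of terabits
```

Even allowing for heavy compression, a rate in the hundreds of terabits per second is orders of magnitude beyond any consumer connection today — which is why the next paragraph points to exponential bandwidth growth as the eventual enabler.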

By midcentury, this problem may be resolved as the bandwidth of the Internet expands exponentially.

What might true 3-D TV look like?

One possibility is a screen shaped like a cylinder or dome that you sit inside. When the holographic image is flashed onto the screen, we see the 3-D images surrounding us, as if they were really there.

MIND OVER MATTER

By the end of this century, we will control computers directly with our minds. Like Greek gods, we will think of certain commands and our wishes will be obeyed. The foundation for this technology has already been laid. But it may take decades of hard work to perfect it. This revolution is in two parts: First, the mind must be able to control objects around it. Second, a computer has to decipher a person’s wishes in order to carry them out.

The first significant breakthrough was made in 1998, when scientists at Emory University and the University of Tübingen, Germany, put a tiny glass electrode directly into the brain of a fifty-six-year-old man who was paralyzed after a stroke. The electrode was connected to a computer that analyzed the signals from his brain. The stroke victim was able to see an image of the cursor on the computer screen. Then, by biofeedback, he was able to control the cursor of the computer display by thinking alone. For the first time, a direct contact was made between the human brain and a computer.

The most sophisticated version of this technology has been developed at Brown University by neuroscientist John Donoghue, who has created a device called BrainGate to help people who have suffered debilitating brain injuries communicate. He created a media sensation and even made the cover of Nature magazine in 2006.

Donoghue told me that his dream is to have BrainGate revolutionize the way we treat brain injuries by harnessing the full power of the information revolution. It has already had a tremendous impact on the lives of his patients, and he has high hopes of furthering this technology. He has a personal interest in this research because, as a child, he was confined to a wheelchair due to a degenerative disease and hence knows the feeling of helplessness.

His patients include stroke victims who are completely paralyzed and unable to communicate with their loved ones, but whose brains are active. He has placed a chip, just 4 millimeters wide, on top of a stroke victim’s brain, in the area that controls motor movements. This chip is then connected to a computer that analyzes and processes the brain signals and eventually sends the message to a laptop.

At first the patient has no control over the location of the cursor, but can see where the cursor is moving. By trial and error, the patient learns to control the cursor, and, after several hours, can position the cursor anywhere on the screen. With practice, the stroke victim is able to read and write e-mails and play video games. In principle a paralyzed person should be able to perform any function that can be controlled by the computer.

Initially, Donoghue started with four patients, two who had spinal cord injuries, one who’d had a stroke, and a fourth who had ALS (amyotrophic lateral sclerosis). One of them, a quadriplegic paralyzed from the neck down, took only a day to master the movement of the cursor with his mind. Today, he can control a TV, move a computer cursor, play a video game, and read e-mail. Patients can also control their mobility by manipulating a motorized wheelchair.

In the short term, this is nothing less than miraculous for people who are totally paralyzed. One day, they are trapped, helpless, in their bodies; the next day, they are surfing the Web and carrying on conversations with people around the world.

(I once attended a gala reception at Lincoln Center in New York in honor of the great cosmologist Stephen Hawking. It was heartbreaking to see him strapped into a wheelchair, unable to move anything but a few facial muscles and his eyelids, with nurses holding up his limp head and pushing him around. It takes him hours and days of excruciating effort to communicate simple ideas via his voice synthesizer. I wondered if it was not too late for him to take advantage of the technology of BrainGate. Then John Donoghue, who was also in the audience, came up to greet me. So perhaps BrainGate is Hawking’s best option.)

Another group of scientists, at Duke University, has achieved similar results in monkeys. Miguel A. L. Nicolelis and his group have placed chips on the brains of monkeys. Each chip is connected to a mechanical arm. At first, a monkey flails about, not understanding how to operate the mechanical arm. But with some practice, these monkeys, using the power of their brains, are able to slowly control the motions of the mechanical arm—for example, moving it so that it grabs a banana. They can instinctively move these arms without thinking, as if the mechanical arm were their own. “There’s some physiological evidence that during the experiment they feel more connected to the robots than to their own bodies,” says Nicolelis.

This also means that we will one day be able to control machines using pure thought. People who are paralyzed may be able to control mechanical arms and legs in this way. For example, one might be able to connect a person’s brain directly to mechanical arms and legs, bypassing the spinal cord, so the patient can walk again. Also, this may lay the foundation for controlling our world via the power of the mind.

MIND READING

If the brain can control a computer or mechanical arm, can a computer read the thoughts of a person, without placing electrodes inside the brain?

It’s been known since 1875 that the brain is based on electricity moving through its neurons, which generates faint electrical signals that can be measured by placing electrodes around a person’s head. By analyzing the electrical impulses picked up by these electrodes, one can record the brain waves. This is called an EEG (electroencephalogram), which can record gross changes in the brain, such as when it is sleeping, and also moods, such as agitation, anger, etc. The output of the EEG can be displayed on a computer screen, which the subject can watch. After a while, the person is able to move the cursor by thinking alone. Already, Niels Birbaumer of the University of Tübingen has been able to train partially paralyzed people to type simple sentences via this method.
