Soon after, Weizenbaum and I were coteaching a course on computers and society at MIT. Our class sessions were lively. During class meetings he would rail against his program’s capacity to deceive; I did not share his concern. I saw ELIZA as a kind of Rorschach, the psychologist’s inkblot test. People used the program as a projective screen on which to express themselves. Yes, I thought, they engaged in personal conversations with ELIZA, but in a spirit of “as if.” They spoke as if someone were listening but knew they were their own audience. They became caught up in the exercise. They thought, I will talk to this program as if it were a person. I will vent; I will rage; I will get things off my chest. More than this, while some learned enough about the program to trip it up, many more used this same inside knowledge to feed ELIZA responses that would make it seem more lifelike. They were active in keeping the program in play.
Weizenbaum was disturbed that his students were in some way duped by the program into believing—against everything they knew to be true—that they were dealing with an intelligent machine. He felt almost guilty about the deception machine he had created. But his worldly students were not deceived. They knew all about ELIZA’s limitations, but they were eager to “fill in the blanks.” I came to think of this human complicity in a digital fantasy as the “ELIZA effect.” Through the 1970s, I saw this complicity with the machine as no more threatening than wanting to improve the working of an interactive diary. As it turned out, I underestimated what these connections augured. At the robotic moment, more than ever, our willingness to engage with the inanimate does not depend on being deceived but on wanting to fill in the blanks.
Now, over four decades after Weizenbaum wrote the first version of ELIZA, artificial intelligences known as “bots” present themselves as companions to the millions who play computer games on the Internet. Within these game worlds, it has come to seem natural to “converse” with bots about a variety of matters, from routine to romantic. And, as it turns out, it’s a small step from having your “life” saved by a bot you meet in a virtual world to feeling a certain affection toward it—and not the kind of affection you might feel toward a stereo or car, no matter how beloved. Meantime, in the physical real, things proceed apace. The popular Zhu Zhu robot pet hamsters come out of the box in “nurturing mode.” The official biography of the Zhu Zhu named Chuck says, “He lives to feel the love.” For the elderly, the huggable baby seal robot Paro is now on sale. A hit in Japan, it now targets the American nursing home market. Roboticists make the case that the elderly need a companion robot because of a lack of human resources. Almost by definition, they say, robots will make things better.
While some roboticists dream of reverse engineering love, others are content to reverse engineer sex.[3]
In February 2010, I googled the exact phrase “sex robots” and came up with 313,000 hits, the first of which was linked to an article titled “Inventor Unveils $7,000 Talking Sex Robot.” Roxxxy, I learned, “may be the world’s most sophisticated, talking sex robot.”[4]
The shock troops of the robotic moment, dressed in lingerie, may be closer than most of us have ever imagined. And true to the ELIZA effect, this is not so much because the robots are ready but because we are.
In a television news story about a Japanese robot designed in the form of a sexy woman, a reporter explains that although this robot currently performs only as a receptionist, its designers hope it will someday serve as a teacher and companion. Far from skeptical, the reporter bridges the gap between the awkward robot before him and the idea of something akin to a robot wife by referring to the “singularity.” He asks the robot’s inventor, “When the singularity comes, no one can imagine where she [the robot] could go. Isn’t that right? . . . What about these robots after the singularity? Isn’t it the singularity that will bring us the robots that will surpass us?”
The singularity? This notion has migrated from science fiction to engineering. The singularity is the moment—it is mythic; you have to believe in it—when machine intelligence crosses a tipping point.[5] Past this point, say those who believe, artificial intelligence will go beyond anything we can currently conceive. No matter if today’s robots are not ready for prime time as receptionists. At the singularity, everything will become technically possible, including robots that love. Indeed, at the singularity, we may merge with the robotic and achieve immortality. The singularity is technological rapture.
As for Weizenbaum’s concerns that people were open to computer psychotherapy, he correctly sensed that something was going on. In the late 1970s, there was considerable reticence about computer psychotherapy, but soon after, opinions shifted.[6]
The arc of this story does not reflect new abilities of machines to understand people, but people’s changing ideas about psychotherapy and the workings of their own minds, both seen in more mechanistic terms.[7]
Thirty years ago, with psychoanalysis more central to the cultural conversation, most people saw the experience of therapy as a context for coming to see the story of your life in new terms. This happened through gaining insight and developing a relationship with a therapist who provided a safe place to address knotty problems. Today, many see psychotherapy less as an investigation of the meaning of our lives and more as an exercise to achieve behavioral change or work on brain chemistry. In this model, the computer becomes relevant in several ways. Computers can help with diagnosis, be set up with programs for cognitive behavioral therapy, and provide information on alternative medications.
Previous hostility to the idea of the computer as psychotherapist was part of a “romantic reaction” to the computer presence, a sense that there were some places a computer could not and should not go. In shorthand, the romantic reaction said, “Simulated thinking might be thinking, but simulated feeling is not feeling; simulated love is never love.” Today, that romantic reaction has largely given way to a new pragmatism. Computers “understand” as little as ever about human experience—for example, what it means to envy a sibling or miss a deceased parent. They do, however, perform understanding better than ever, and we are content to play our part. After all, our online lives are all about performance. We perform on social networks and direct the performances of our avatars in virtual worlds. A premium on performance is the cornerstone of the robotic moment. We live the robotic moment not because we have companionate robots in our lives but because the way we contemplate them on the horizon says much about who we are and who we are willing to become.
How did we get to this place? The answer to that question is hidden in plain sight, in the rough-and-tumble of the playroom, in children’s reactions to robot toys. As adults, we can develop and change our opinions. In childhood, we establish the truth of our hearts.
I have watched three decades of children with increasingly sophisticated computer toys. I have seen these toys move from being described as “sort of alive” to “alive enough,” the language of the generation whose childhood play was with sociable robots (in the form of digital pets and dolls). Getting to “alive enough” marks a watershed. In the late 1970s and early 1980s, children tried to make philosophical distinctions about aliveness in order to categorize computers. These days, when children talk about robots as alive enough for specific purposes, they are not trying to settle abstract questions. They are being pragmatic: different robots can be considered on a case-by-case and context-by-context basis. (Is it alive enough to be a friend, a babysitter, or a companion for your grandparents?) Sometimes the question becomes more delicate: If a robot makes you love it, is it alive?
LIFE RECONSIDERED
In the late 1970s and early 1980s, children met their first computational objects: games like Merlin, Simon, and Speak & Spell. This first generation of computers in the playroom challenged children in memory and spelling games, routinely beating them at tic-tac-toe and hangman.[8]
The toys, reactive and interactive, turned children into philosophers. Above all else, children asked themselves whether something programmed could be alive.
Children’s starting point here is their animation of the world. Children begin by understanding the world in terms of what they know best: themselves. Why does the stone roll down the slope? “To get to the bottom,” says the young child, as though the stone had its own desires. But in time, animism gives way to physics. The child learns that a stone falls because of gravity; intentions have nothing to do with it. And so a dichotomy is constructed: physical and psychological properties stand opposed to one another in two great systems. But the computer is a new kind of object: it is psychological and yet a thing. Marginal objects such as the computer, on the lines between categories, draw attention to how we have drawn the lines.[9]
Swiss psychologist Jean Piaget, interviewing children in the 1920s, found that they took up the question of an object’s life status by considering its physical movement.[10]
For the youngest children, everything that could move was alive; later, only things that could move without an outside push or pull. People and animals were easily classified. But clouds that seemed to move of their own accord were classified as alive until children realized that wind, an external but invisible force, was pushing them along. Cars were reclassified as not alive when children understood that motors counted as an “outside” push. Finally, the idea of autonomous movement became focused on breathing and metabolism, the motions most particular to life.
In the 1980s, faced with computational objects, children began to think through the question of aliveness in a new way, shifting from physics to psychology.[11]
When they considered a toy that could beat them at spelling games, they were interested not in whether such an object could move on its own but in whether it could think on its own. Children asked if this game could “know.” Did it cheat? Was knowing part of cheating? They were fascinated by how electronic games and toys showed a certain autonomy. When an early version of Speak & Spell—a toy that played language and spelling games—had a programming bug and could not be turned off during its “say it” routine, children shrieked with excitement, finally taking out the game’s batteries to “kill it” and then (with the reinsertion of the batteries) bring it back to life.
In their animated conversations about computer life and death, children of the 1980s imposed a new conceptual order on a new world of objects.[12]
In the 1990s, that order was strained to the breaking point. Simulation worlds—for example the Sim games—pulsed with evolving life forms. And child culture was awash in images of computational objects (from Terminators to digital viruses) all shape-shifting and morphing in films, cartoons, and action figures. Children were encouraged to see the stuff of computers as the same stuff of which life is made. One eight-year-old girl referred to mechanical life and human life as “all the same stuff, just yucky computer ‘cy-dough-plasm.’” All of this led to a new kind of conversation about aliveness. Now, when considering computation, children talked about evolution as well as cognition. And they talked about a special kind of mobility. In 1993, a ten-year-old considered whether the creatures on the game SimLife were alive. She decided they were “if they could get out of your computer and go to America Online.”[13]
Here, Piaget’s narrative about motion resurfaced in a new guise. Children often imbued the creatures in simulation games with a desire to escape their confines and enter a wider digital world. And then, starting in the late 1990s, digital “creatures” came along that tried to dazzle children not with their smarts but with their sociability. I began a long study of children’s interactions with these new machines. Of course, children said that a sociable robot’s movement and intelligence were signs of its life. But even in conversations specifically about aliveness, children were more concerned about what these new robots might feel. As a criterion for life, everything else pales in comparison to a robot’s capacity to care.
Consider how often thoughts turn to feelings as three elementary school children discuss the aliveness of a Furby, an owl-like creature that plays games and seems to learn English under a child’s tutelage. The first, a five-year-old girl, can only compare it to a Tamagotchi, a tiny digital creature on an LCD screen that also asks to be loved, cared for, and amused. She asks herself, “Is it [the Furby] alive?” and answers, “Well, I love it. It’s more alive than a Tamagotchi because it sleeps with me. It likes to sleep with me.” A six-year-old boy believes that something “as alive as a Furby” needs arms: “It might want to pick up something or to hug me.” A nine-year-old girl thinks through the question of a Furby’s aliveness by commenting, “I really like to take care of it. . . . It’s as alive as you can be if you don’t eat. . . . It’s not like an animal kind of alive.”
From the beginning of my studies of children and computers in the late 1970s, children spoke about an “animal kind of alive” and a “computer kind of alive.” Now I hear them talk about a “people kind of love” and a “robot kind of love.” Sociable robots bring children to the locution that the machines are alive enough to care and be cared for. In speaking about sociable robots, children use the phrase “alive enough” as a measure not of biological readiness but of relational readiness. Children describe robots as alive enough to love and mourn. And robots, as we saw at the American Museum of Natural History, may be alive enough to substitute for the biological, depending on the context. One reason the children at the museum were so relaxed about a robot substituting for a living tortoise is that they were comfortable with the idea of a robot as both machine and creature. I see this flexibility in seven-year-old Wilson, a bright, engaged student at a Boston public elementary school where I bring robot toys for after-school play. Wilson reflects on a Furby I gave him to take home for several weeks: “The Furby can talk, and it looks like an owl,” yet “I always hear the machine in it.” He knows, too, that the Furby, “alive enough to be a friend,” would be rejected in the company of animals: “A real owl would snap its head off.” Wilson does not have to deny the Furby’s machine nature to feel it would be a good friend or to look to it for advice. His Furby has become his confidant. Wilson’s way of keeping in mind the dual aspects of the Furby’s nature seems to me a philosophical version of multitasking, so central to our twenty-first-century attentional ecology. His attitude is pragmatic. If something that seems to have a self is before him, he deals with the aspect of self he finds most relevant to the context.