Artificial intelligence is often described as the art and science of “getting machines to do things that would be considered intelligent if done by people.” We are coming to a parallel definition of artificial emotion as the art of “getting machines to express things that would be considered feelings if expressed by people.” Ashley describes the moment of being caught between categories: she realizes that what the robot is “acting out” is not emotion, yet she feels the pull of seeing “the colors” and experiencing AIBO as “upset.” Ashley ends up seeing AIBO as both machine and creature.
So does John Lester, a computer scientist coming from a far more sophisticated starting point. From the early 1990s, Lester pioneered the use of online communities for teaching, learning, and collaboration, including recent work developing educational spaces on the virtual world of Second Life. Lester bought one of the first AIBOs on the market. He called it Alpha in deference to its being “one of the first batch.”
When Lester took Alpha out of its box, he shut the door to his office and spent the entire day “hanging out with [my] new puppy.” He describes the experience as “intense,” comparing it to the first time he saw a computer or typed into a Web browser. He quickly mastered the technical aspects of AIBO, but this kind of understanding did not interfere with his pleasure in simply being with the puppy. When Sony modified the robot’s software, Lester bought a second AIBO and named it Beta. Alpha and Beta are machines, but Lester does not like anyone to treat them as inanimate metal and plastic. “I think about my AIBOs in different ways at the same time,” Lester says.
In the early days of cubism, the simultaneous presentation of many perspectives of the human face was subversive. But at a certain point, one becomes accustomed to looking at a face in this new way. A face, after all, does have multiple aspects; only representational conventions keep us from appreciating them together. But once convention is challenged, the new view of the face suggests depth and new complexities. Lester has a cubist view of AIBO; he is aware of it as machine, bodily creature, and mind. An AIBO’s sentience, he says, is “awesome.” The creature is endearing. He appreciates the programming behind the exact swing of the “floppy puppy ears.” To Lester, that programming gives AIBO a mind.
Lester understands the mechanisms that AIBO’s designers have used to draw him in: AIBO’s gaze, its expressions of emotion, and the fact that it “grows up” under his care. But this understanding does not interfere with his attachment, just as knowing that infants draw him in with their big, wide eyes does not threaten his connection with babies. Lester says that when he is with AIBO, he does not feel alone. He says that “from time to time” he “catches himself” in engineer mode, remarking on a technical detail of AIBO that he admires, but these moments do not pull him away from enjoying the companionship of his AIBO puppies. This is not a connection he plays at.
It is a big step from accepting AIBO as a companion, and even a solace, to the proposals of David Levy, the computer scientist who imagines robots as intimate partners. But today’s fantasies and Levy’s dreams share something important: the idea that after a robot serves as a better-than-nothing substitute, it might become equal, or even preferable, to a pet or person. In Yolanda’s terms, if your pet is a robot, it might always stay a cute puppy. By extension, if your lover were a robot, you would always be the center of its universe. A robot would not just be better than nothing or better than something, but better than anything. From watching children play with objects designed as “amusements,” we come to a new place, a place of cold comforts. Child and adult, we imagine made-to-measure companions. Or, at least we imagine companions who are always interested in us.
Harry, a forty-two-year-old architect, enjoys AIBO’s company and teaching it new tricks. He knows that AIBO is not aware of him as a person but says, “I don’t feel bad about this. A pet isn’t as aware of me as a person might be.... Dogs don’t measure up to people.... Each level of creature simply does their best. I like it that he [AIBO] recognizes me as his master.” Jane, thirty-six, a grade school teacher, is similarly invested in her AIBO. She says she has “adopted my husband’s AIBO . . . because it is so cute. I named it and love to spend time with it.” Early in our conversation, Jane claims that she turns to AIBO for “amusement,” but she ends up saying that she also turns to it when she is lonely. Jane looks forward to its company after a long workday. Jane talks to her AIBO. “Spend[ing] time” with AIBO means sharing the events of her day, “like who I’m having lunch with at school, which students give me trouble.” Her husband, says Jane, is not interested in these topics. It is more comfortable to talk to AIBO than to force him to listen to stories that bore him. In the company of their robots, Jane and Harry are alone in a way that encourages them to give voice to their feelings. Is there harm here?
In the case of children, I am concerned about their getting comfortable with the idea that a robot’s companionship is even close to a replacement for a person. Later, we will hear teenagers talk about their dread of conversation as they explain why “texting is always better than talking.” Some comment that “sometime, but not now,” it would be good to learn how to have a conversation. The fantasy of robotic companionship suggests that sometime might not have to come. But what of an adult who says he prefers a robot for a reason?
Wesley, sixty-four, provides us with such a case. He has come to see his own self-centeredness as an intractable problem. He imagines a robot helpmate as a way to satisfy himself without hurting others. Divorced three times, Wesley hopes a robot would “learn my psychology. How I get depressed, how I get over it. A robot that could anticipate my cycles, never criticize me over them, learn how to just let me get over them.” Wesley says, “I’d want from the robot a lot of what I want from a woman, but I think the robot would give me more in some ways. With a woman, there are her needs to consider.... That’s the trouble I get into. If someone loves me, they care about my ups and downs. And that’s so much pressure.”
Wesley knows he is difficult to live with. He once saw a psychiatrist who told him that his “cycles” were out of the normal range. Ex-wives, certainly, have told him he is “too moody.” He sees himself as “pressure” on a woman, and he feels pressure as well because he has not been able to protect women he cared for from his “ups and downs.” He likes the idea of a robot because he could act naturally—it could not be hurt by his dark moods. Wesley considers the possibility of two “women,” one real and the other artificial: “Maybe I would want a robot that would be the perfect mate—less needs—and a real woman. The robot could take some of the pressure off the real woman. She wouldn’t have to perform emotionally at such a high level, really an unrealistic level.... I could stay in my comfort zone.”
Rudimentary versions of Wesley’s fantasy are in development. I have spoken briefly of the Internet buzz over Roxxxy, put on the market in January 2010, advertised as “the world’s first sex robot.” Roxxxy cannot move, although it has electronically warmed skin and internal organs that pulse. It does, however, make conversation. The robot’s creator, Douglas Hines, helpfully offers, “Sex only goes so far—then you want to be able to talk to the person.”
So, for example, when Roxxxy senses that its hand is being held, the robot says, “I love holding hands with you,” and moves into more erotic conversation when the physical caresses become more intimate. One can choose different personalities for Roxxxy, ranging from wild to frigid. The robot will be updated over the Internet to expand its capabilities and vocabulary. It can already discuss soccer.
Hines, an engineer, says that he got into the robot business after a friend died in the September 11 attacks on the Twin Towers. Hines wanted to preserve his friend’s personality so that his children could interact with him as they grew up. Like AI scientist and inventor Raymond Kurzweil, who dreams of a robotic incarnation of his father who died tragically young, Hines committed himself to the project of building an artificial personality. At first, he considered building a home health aid for the elderly but decided to begin with sex robots, a decision that he calls “only marketing.” His long-term goal is to take artificial personalities into the mainstream. He still wants to recreate his lost friend.
The well-publicized launch of Roxxxy elicits a great deal of online discussion. Some postings talk about how “sad” it is that a man would want such a doll. Others argue that having a robot companion is better than being lonely. For example, “There are men for who attaining a real woman is impossible.... This isn’t simply a matter of preference.... In the real world, sometimes second best is all they can get.”
I return to the question of harm. Dependence on a robot presents itself as risk free. But when one becomes accustomed to “companionship” without demands, life with people may seem overwhelming. Dependence on a person is risky—it makes us subject to rejection—but it also opens us to deeply knowing another. Robotic companionship may seem a sweet deal, but it consigns us to a closed world—the loveable as safe and made to measure.
Roboticists insist that the artificial can be made unpredictable so that relating to robots will never feel rote or mechanical. Robots, they say, will be surprising, helpful, and meaningful in their own right. Yet, in my interviews, fantasies about robot companions do not dwell on robots full of delightful surprises. Rather, they return, again and again, to how robots might, as Yolanda suggested, be made to order, a safe haven in an unsafe world.
CHAPTER 4
Enchantment
A little over a year after AIBO’s release, My Real Baby became available in stores. In November 2000, I attended a party at MIT to celebrate its launch. The air was festive: My Real Babies were being handed around liberally to journalists, designers, toy-industry executives, and members of the MIT faculty and their guests.
An editor from Wired magazine made a speech at the party, admiring how much advanced technology was now available off the shelf. The robot was impressive, certainly. But it was also surprisingly clunky; its motors whirred as its limited range of facial expressions changed. Engineering students around me expressed disappointment, having hoped for more. As I chatted with one of them, my eyes wandered to a smiling faculty wife who had picked up a My Real Baby and was holding it to her as she would a real child. She had the robot resting over her shoulder, and I noticed her moment of shocked pleasure when the robot burped and then settled down. The woman instinctively kissed the top of My Real Baby’s head and gently massaged its back as she talked with a friend—all of these the timeless gestures of maternal multitasking. Later, as she was leaving, I asked her about the experience. “I loved it,” she said. “I can’t wait to get one.” I asked why. “No reason. It just gives me a good feeling.”
My Real Baby tells you when it is happy and when it wants to play. But it adds a lot more to the mix: it blinks and sucks its thumb; with facial musculature under its skin, it can smile, laugh, frown, and cry. As with all sociable robots, getting along with the robot baby requires learning to read its states of mind. It gets tired and wants to sleep; it gets overexcited and wants to be left alone. It wants to be touched, fed, and have its diaper changed. Over time, My Real Baby develops from infant into two-year-old; baby cries and moans give way to coherent sentences. As it matures, the robot becomes more independent, more likely to assert its needs and preferences. The Tamagotchi primer is followed in the essential: My Real Baby demands care, and its personality is shaped by the care it receives.
Both AIBO and My Real Baby encourage people to imagine robots in everyday life. That is not surprising. After all, these are not extraterrestrials: one is a dog and one is a baby. What is surprising is that time spent with these robots provokes not just fantasies about mutual affection, as we’ve already seen, but the notion that robots will be there to care for us in the sense of taking care of us. To put it too simply, conversations about My Real Baby easily lead to musing about a future in which My Real Baby becomes My Real Babysitter. In this, My Real Baby and AIBO are evocative objects—they give people a way to talk about their disappointments with the people around them—parents and babysitters and nursing home attendants—and imagine being served more effectively by robots. When one fifth-grade boy objects that the AIBO before him wouldn’t be useful to an elderly person, he is corrected. His classmates make it clear that they are not talking about AIBO specifically. “AIBO is one, but there will be more.”
The first time I heard this fantasy—children suggesting that the descendants of such primitive robots might someday care for them—I was stunned. But in fact, the idea of robot caretaking is now widespread in the culture. Traditional science fiction, from Frankenstein to the Chucky movies, portrays the inanimate coming to life as terrifying. Recently, however, it has also been portrayed as gratifying, nearly redemptive. In Star Wars, R2D2 is every child’s dream of a helpmate. In Steven Spielberg’s A.I.: Artificial Intelligence, a robot’s love brings hope to a grieving mother. In Disney’s WALL-E, a robot saves the planet, but more than this, it saves the people: it reminds them how to love. In 9, the humans are gone, but the robots that endure are committed to salvaging human values. An emerging mythology depicts benevolent robots.