There are things, which you cannot tell your friends or your parents, which . . . you could tell an AI. Then it would give you advice you could be more sure of.... I’m assuming it would be programmed with prior knowledge of situations and how they worked out. Knowledge of you, probably knowledge of your friends, so it could make a reasonable decision for your course of action. I know a lot of teenagers, in particular, tend to be caught up in emotional things and make some really bad mistakes because of that.
 
I ask Howard to imagine what his first few conversations with a robot might be like. He says that the first would be “about happiness and exactly what that is, how do you gain it.” The second conversation would be “about human fallibility,” understood as something that causes “mistakes.” From Bruce to Howard, human fallibility has gone from being an endearment to a liability.
No generation of parents has ever seemed like experts to their children. But those in Howard’s generation are primed to see the possibilities for relationships their elders never envisaged. They assume that an artificial intelligence could monitor all of their e-mails, calls, Web searches, and messages. This machine could supplement its knowledge with its own searches and retain a nearly infinite amount of data. So, many of them imagine that via such search and storage an artificial intelligence or robot might tune itself to their exact needs. As they see it, nothing technical stands in the way of this robot’s understanding, as Howard puts it, “how different social choices [have] worked out.” Having knowledge and your best interests at heart, “it would be good to talk to . . . about life. About romantic matters. And problems of friendship.”
Life? Romantic matters? Problems of friendship? These were the sacred spaces of the romantic reaction. Only people were allowed there. Howard thinks that all of these can be boiled down to information so that a robot can be both expert resource and companion. We are at the robotic moment.
As I have said, my story of this moment is not so much about advances in technology, impressive though these have been. Rather, I call attention to our strong response to the relatively little that sociable robots offer—fueled, it would seem, by our fond hope that they will offer more. With each new robot, there is a ramp-up in our expectations. I find us vulnerable—a vulnerability, I believe, not without risk.
CHAPTER 3
 
True Companions
 
In April 1999, a month before AIBO’s commercial release, Sony demonstrated the little robot dog at a conference on new media in San Jose, California. I watched it walk jerkily onto an empty stage, followed by its inventor, Toshitada Doi. At his bidding, AIBO fetched a ball and begged for a treat. Then, with seeming autonomy, AIBO raised its back leg to some suggestion of a hydrant. Then, it hesitated, a stroke of invention in itself, and lowered its head as though in shame. The audience gasped. The gesture, designed to play to the crowd, was wildly successful. I imagined how audiences responded to Jacques de Vaucanson’s eighteenth-century digesting (and defecating) mechanical duck and to the chess-playing automata that mesmerized Edgar Allan Poe. AIBO, like these, was applauded as a marvel, a wonder.1
Depending on how it is treated, an individual AIBO develops a distinct personality as it matures from a fall-down puppy to a grown-up dog. Along the way, AIBO learns new tricks and expresses feelings: flashing red and green eyes direct our emotional traffic; each of its moods comes with its own soundtrack. A later version of AIBO recognizes its primary caregiver and can return to its charging station, smart enough to know when it needs a break. Unlike a Furby, whose English is “destined” to improve as long as you keep it turned on, AIBO stakes a claim to intelligence and impresses with its ability to show what’s on its mind.
If AIBO is in some sense a toy, it is a toy that changes minds. It does this in several ways. It heightens our sense of being close to developing a postbiological life and not just in theory or in the laboratory. And it suggests how this passage will take place. It will begin with our seeing the new life as “as if” life and then deciding that “as if” may be life enough. Even now, as we contemplate “creatures” with artificial feelings and intelligence, we come to reflect differently on our own. The question here is not whether machines can be made to think like people but whether people have always thought like machines.
The reconsiderations begin with children. Zane, six, knows that AIBO doesn’t have a “real brain and heart,” but they are “real enough.” AIBO is “kind of alive” because it can function “as if it had a brain and heart.” Paree, eight, says that AIBO’s brain is made of “machine parts,” but that doesn’t keep it from being “like a dog’s brain.... Sometimes, the way [AIBO] acted, like he will get really frustrated if he can’t kick the ball. That seemed like a real emotion . . . so that made me treat him like he was alive, I guess.” She says that when AIBO needs its batteries charged, “it is like a dog’s nap.” And unlike a teddy bear, “an AIBO needs its naps.”
As Paree compares her AIBO’s brain to that of a dog, she clears the way for other possibilities. She considers whether AIBO might have feelings like a person, wondering if AIBO “knows its own feelings”—or “if the controls inside know them.” Paree says that people use both methods. Sometimes people have spontaneous feelings and “just become aware” of them (this is “knowing your own feelings”). But other times, people have to program themselves to have the feelings they want. “If I was sad and wanted to be happy”—here Paree brings her fists up close to her ears to demonstrate concentration and intent—“I would have to make my brain say that I am set on being happy.” The robot, she thinks, probably has the second kind of feelings, but she points out that both ways of getting to a feeling get you to the same place: a smile or a frown if you are a person, a happy or sad sound if you are an AIBO. Different inner states lead to the same outward states, and so inner states cease to matter. AIBO carries a behaviorist sensibility.
SPARE PARTS
 
Keith, seventeen, is going off to college next year and taking his AIBO with him. He treats the robot as a pet, all the while knowing that it is not a pet at all. He says, “Well, it’s not a pet like others, but it is a damn good pet. . . . I’ve taught it everything. I’ve programmed it to have a personality that matches mine. I’ve never let it reset to its original personality. I keep it on a program that lets it develop to show the care I’ve put into it. But of course, it’s a robot, so you have to keep it dry, you have to take special care with it.” His classmate Logan also has an AIBO. The two have raised the robots together. If anything, Logan’s feelings are even stronger than Keith’s. Logan says that talking to AIBO “makes you better, like, if you’re bored or tired or down . . . because you’re actually, like, interacting with something. It’s nice to get thoughts out.”
The founders of artificial intelligence were much taken with the ethical and theological implications of their enterprise. They discussed the mythic resonance of their new science: Were they people putting themselves in the place of gods?2
The impulse to create an object in one’s own image is not new—think Galatea, Pygmalion, Frankenstein. These days, what is new is that an off-the-shelf technology as simple as an AIBO provides an experience of shaping one’s own companion. But the robots are shaping us as well, teaching us how to behave so that they can flourish.3
Again, there is psychological risk in the robotic moment. Logan’s comment about talking with the AIBO to “get thoughts out” suggests using technology to know oneself better. But it also suggests a fantasy in which we cheapen the notion of companionship to a baseline of “interacting with something.” We reduce relationship and come to see this reduction as the norm.
As infants, we see the world in parts. There is the good—the things that feed and nourish us. There is the bad—the things that frustrate or deny us. As children mature, they come to see the world in more complex ways, realizing, for example, that beyond black and white, there are shades of gray. The same mother who feeds us may sometimes have no milk. Over time, we transform a collection of parts into a comprehension of wholes.4 With this integration, we learn to tolerate disappointment and ambiguity. And we learn that to sustain realistic relationships, one must accept others in their complexity. When we imagine a robot as a true companion, there is no need to do any of this work.
The first thing missing if you take a robot as a companion is alterity, the ability to see the world through the eyes of another.5 Without alterity, there can be no empathy. Writing before robot companions were on the cultural radar, the psychoanalyst Heinz Kohut described barriers to alterity, writing about fragile people—he calls them narcissistic personalities—who are characterized not by love of self but by a damaged sense of self. They try to shore themselves up by turning other people into what Kohut calls selfobjects. In the role of selfobject, another person is experienced as part of one’s self, thus in perfect tune with a fragile inner state. The selfobject is cast in the role of what one needs, but in these relationships, disappointments inevitably follow. Relational artifacts (not only as they exist now but as their designers promise they will soon be) clearly present themselves as candidates for the role of selfobject.
If they can give the appearance of aliveness and yet not disappoint, relational artifacts such as sociable robots open new possibilities for narcissistic experience. One might even say that when people turn other people into selfobjects, they are trying to turn a person into a kind of spare part. A robot is already a spare part. From this point of view, relational artifacts make a certain amount of “sense” as successors to the always-resistant human material. I insist on underscoring the “scare quotes” around the word “sense.” For, from a point of view that values the richness of human relationships, they don’t make any sense at all. Selfobjects are “part” objects. When we fall back on them, we are not taking in a whole person. Those who can only deal with others as part objects are highly vulnerable to the seductions of a robot companion. Those who succumb will be stranded in relationships that are only about one person.
This discussion of robots and psychological risks brings us to an important distinction. Growing up with robots in roles traditionally reserved for people is different from coming to robots as an already socialized adult. Children need to be with other people to develop mutuality and empathy; interacting with a robot cannot teach these. Adults who have already learned to deal fluidly and easily with others and who choose to “relax” with less demanding forms of social “life” are at less risk. But whether child or adult, we are vulnerable to simplicities that may diminish us.
GROWING UP AIBO
 
With a price tag of $1,300 to $2,000, AIBO is meant for grown-ups. But the robot dog is a harbinger of the digital pets of the future, and so I present it to children from age four to thirteen as well as to adults. I bring it to schools, to after-school play centers, and, as we shall see in later chapters, to senior centers and nursing homes. I offer AIBOs for home studies, where families get to keep them for two or three weeks. Sometimes, I study families who have bought an AIBO of their own. In these home studies, just as in the home studies of Furbies, families are asked to keep a “robot diary.” What is it like living with an AIBO?
The youngest children I work with—the four- to six-year-olds—are initially preoccupied with trying to figure out what the AIBO is, for it is not a dog and not a doll. The desire to get such things squared away is characteristic of their age. In the early days of digital culture, when they met their first electronic toys and games, children of this age would remain preoccupied with such questions of categories. But now, faced with this sociable machine, children address them and let them drop, taken up with the business of a new relationship.
Maya, four, has an AIBO at home. She first asks questions about its origins (“How do they make it?”) and comes up with her own answer: “I think they start with foil, then soil, and then you get some red flashlights and then put them in the eyes.” Then she pivots to sharing the details of her daily life with AIBO: “I love to play with AIBO every day, until the robot gets tired and needs to take a nap.” Henry, four, follows the same pattern. He begins with an effort to categorize AIBO: AIBO is closest to a person, but different from a person because it is missing a special “inner power,” an image borrowed from his world of Pokémon.6 But when I see Henry a week later, he has bonded with AIBO and is stressing the positive, all the things they share. The most important of these are “remembering and talking powers, the strongest powers of all.” Henry is now focused on the question of AIBO’s affection: How much does this robot like him? Things seem to be going well: he says that AIBO favors him “over all his friends.”
By eight, children move even more quickly from any concern over AIBO’s “nature” to the pleasures of everyday routines. In a knowing tone, Brenda claims that “people make robots and . . . people come from God or from eggs, but this doesn’t matter when you are playing with the robot.” In this dismissal of origins we see the new pragmatism. Brenda embraces AIBO as a pet. In her robot diary, she reminds herself of the many ways that this pet should not be treated as a dog. One early entry reminds her not to feed it, and another says, “Do not take AIBO on walks so it can poop.” Brenda feels guilty if she doesn’t keep AIBO entertained. She thinks that “if you don’t play with it,” its lights get red to show its discontent at “playing by itself and getting all bored.” Brenda thinks that when bored, AIBO tries to “entertain itself.” If this doesn’t work, she says, “it tries to get my attention.” Children believe that AIBO asks for attention when it needs it. So, for example, a sick AIBO will want to get better and know it needs human help. An eight-year-old says, “It would want more attention than anything in the whole world.”
