Alone Together
Sherry Turkle

7
In the 1980s, the presence of a “programmer” figured in children’s conversations about computer toys and games. The physical autonomy of robots seems to make the question of their origins (who programmed them, and how) fall out of the conversation. This is crucial to how people come to relate to robots as alive on their own account.
Peter H. Kahn and his colleagues performed a set of experiments that studied how children’s attitudes and, crucially, their behavior differed between AIBOs and stuffed doll dogs. When questioned verbally, children reported opinions about AIBO that were similar to their opinions about a stuffed doll dog. But when you look not at what the children say but at what they do, the picture looks very different. Kahn analyzed 2,360 coded interactions. Most dramatically, children playing with AIBO were far more likely to attempt reciprocal behavior (engaging with the robot and expecting it to engage with them in return) than children playing with the stuffed doll dog (683 to 180 occurrences). In the same spirit, half the children in Kahn’s study said that both AIBO and the stuffed doll dog could hear, but children actually gave more verbal direction to AIBO (fifty-four occurrences) than to the stuffed doll dog (eleven occurrences). In other words, when children talk about the lifelike qualities of their dolls, they don’t believe what they say. They do believe what they say about AIBO.
Similarly, children in Kahn’s study were more likely to take action to “animate” the stuffed doll dog (207 occurrences) while they mostly let AIBO animate itself (20 occurrences). Most tellingly, the children were more likely to mistreat the stuffed doll dog than AIBO (184 to 39 occurrences). Relational artifacts, as I stress here, put children on a moral terrain.
Kahn also found evidence that children see AIBO as the “sort of entity with which they could have a meaningful social (human-animal) relationship.” This expresses what I have called simultaneous vision: children see relational artifacts as both machine and creature. They both know AIBO is an artifact and treat it as a dog. See Peter H. Kahn Jr. et al., “Robotic Pets in the Lives of Preschool Children,” Interaction Studies: Social Behavior and Communication in Biological and Artificial Systems 7, no. 3 (2006): 405-436.
See also a study by Kahn and his colleagues on how people write about AIBO on the Web: Peter H. Kahn Jr., Batya Friedman, and Jennifer Hagman, “Hardware Companions? What Online AIBO Discussion Forums Reveal About the Human-Robotic Relationship,” in Proceedings of the Conference on Human Factors in Computing Systems (New York: ACM Press, 2003), 273-280.
8
Older children and adults use access to the AIBO programming code more literally to create an AIBO in their image.
9
I note the extensive and growing literature suggesting that computation (including robots) will, in a short period, enable people to be essentially immortal. The best-known writer in this genre is Raymond Kurzweil, who posits that within a quarter of a century computational power will have hit a point he calls the singularity. It is a moment of “take-off,” when all bets are off as to what computers will be able to do, think, or accomplish. Among the things that Kurzweil believes will be possible after the singularity is that humans will be able to embed themselves in a chip. They could either take an embodied robotic form or (perhaps until this becomes possible) roam a virtual landscape. This virtual self could evolve into an android self when the technology becomes ready. See Raymond Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking, 2005).
Kurzweil’s work has captured the public imagination. A small sample of attention to the singularity in the media includes “Future Shock—Robots,” Daily Show with Jon Stewart, August 24, 2006, www.thedailyshow.com/watch/wed-august-23-2006/future-shock-robots (accessed August 10, 2010); the IEEE Spectrum’s special issue The Singularity: A Special Report, June 3, 2008; James Temple, “Singularity Taps Students’ Technology Ideas,” San Francisco Chronicle, August 28, 2009, www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2009/08/27/BUUQ19EJIL.DTL&type=tech (accessed September 2, 2009); and Iain Mackenzie, “Where Tech and Philosophy Collide,” BBC News, August 12, 2009, http://news.bbc.co.uk/2/hi/technology/8194854.stm (accessed September 2, 2009).
One hears echoes of a “transhuman perspective” (the idea that we will move into a new realm by merging with our machines) in children’s asides as they play with AIBO. Matt, nine, reflecting on his AIBO, said, “I think in twenty years from now, if they had the right stuff, they could put a real brain in a robot body.” The idea of practicing for an impossible “perfection” comes to mind when I attend meetings of AIBO users. They come together to show off how they have customized their AIBOs; they reprogram and “perfect” them. The users I speak to spend as much as fifty to sixty hours a week on their hobby. Some are willing to tell me that they spend more time with their AIBOs than with their families. Yet AIBO is experienced as relaxation. As one thirty-five-year-old computer technician says, “All of this is less pressure than a real dog. No one will die.” Of all the robotic creatures in my studies, AIBO provoked the most musing about death and the finality of loss.
10
Sherry Turkle, The Second Self: Computers and the Human Spirit (1984; Cambridge, MA: MIT Press, 2005), 41.
11
This is a classic use of the defense mechanism known as projective identification, or seeing in someone else what you feel within yourself. So, if a teenager is angry at her prying mother, she may imagine her mother to be hostile. If a wife is angry with an inattentive husband, she may find her husband aggressive. Sometimes these feelings are conscious, but often they are not. Children use play to engage in the projections that can bring unacknowledged feelings to light. “Play is to the child what thinking, planning, and blueprinting are to the adult, a trial universe in which conditions are simplified and methods exploratory, so that past failures can be thought through, expectations tested.” See Erik Erikson, The Erik Erikson Reader, ed. Robert Coles (New York: W. W. Norton, 2000), 195-196.
12
Interview, July 2000.
13
Hines says that the robot is “designed to engage the owner with conversation rather than lifelike movement.” See “Roxxxy Sex Robot [PHOTOS]: World’s First ‘Robot Girlfriend’ Can Do More Than Chat,” Huffington Post, January 10, 2010, www.huffingtonpost.com/2010/01/10/roxxxy-sex-robot-photo-wo_n_417976.html?view=print (accessed January 11, 2010).
14
Paul Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America (Cambridge, MA: MIT Press, 1997).
CHAPTER 4: ENCHANTMENT
 
1
In an interview about the possibilities of a robot babysitter, psychologist Clifford Nass said, “The question is, if robots could take care of your children, would you let them? What does it communicate about our society that we’re not making child-care a number-one priority?” I later spoke with Nass about his response to the idea of the nanny bot, and he rephrased with greater emphasis: “The first problem with offering children a robotic babysitter is that you would have to explain to the children why you were considering this. Why were there no people around for the child?” See Brandon Keim, “I, Nanny: Robot Babysitters Pose Dilemma,” Wired, December 18, 2008, www.wired.com/wiredscience/2008/12/babysittingrobo (accessed May 31, 2010).
2
One article puts this in the context of the Japanese work ethic. See Jennifer Van House Hutcheson, “All Work and No Play,” MercatorNet, May 31, 2007, www.mercatornet.com/articles/view/all_work_and_no_play (accessed August 20, 2009). Another reports on an elderly couple who hired actors to portray their grandchildren and then assaulted the actors as they had wished to assault their true grandchildren. See Pencil Louis, “Elderly Yokohama,” OurCivilisation.com, www.ourcivilisation.com/smartboard/shop/madseng/chap20.htm (accessed August 20, 2009).
3
Chelsea’s mother, Grace, fifty-one, explains her position: “Active young people are simply not the right companions for old people and infirm people.” She says, “When I bring my daughters to see my mom, I feel guilty. They shouldn’t be here. She isn’t what she was, not even as good as when they were little. . . . I think it’s better they remember her happier, healthier.” Grace had seen Paro in my office and is now intrigued by the idea of bringing the robot to her mother. For Grace, her mother’s immortality depends on declaring her, now, no longer the person to remember. It is the case that robots seem “easier” to give as companions to people whom society declares “nonpersons.” Grace is not saying that her mother is a nonperson, but her choice of a robotic companion marks the moment when Grace’s mother is no longer the person Grace wants to remember as her mother.
CHAPTER 5: COMPLICITIES
 
1
Rodney A. Brooks, “The Whole Iguana,” in Robotics Science, ed. Michael Brady (Cambridge, MA: MIT Press, 1989), 432-456. This was written in response to a two-page challenge by Daniel C. Dennett about lessons to be learned by building a complete system rather than just modules. See Daniel C. Dennett, “Why Not the Whole Iguana?” Behavioral and Brain Sciences 1 (1978): 103-104.
2
Kismet is programmed to recognize the word “say” and will repeat the word that follows it. So, children trying to teach Kismet its name would instruct, “Say Kismet,” and Kismet would comply, much to their glee. Similarly, children would try to teach Kismet their names by saying, “Say Robert” . . . “Say Evelyn” . . . “Say Mark.” Here, too, it was within Kismet’s technical ability to comply.
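A minimal sketch of this keyword-spotting behavior might look as follows; the code is an invented illustration in Python, not Kismet’s actual speech software:

```python
# Toy illustration of the "say" behavior described above: scan a
# recognized utterance for the keyword "say" and repeat the word
# that follows it. Invented for illustration; not Kismet's code.

def say_behavior(utterance):
    """Return the word to repeat back, or None if "say" was not heard."""
    words = utterance.lower().split()
    for i, word in enumerate(words[:-1]):
        if word == "say":
            return words[i + 1]  # the word immediately after "say"
    return None

# Children teaching Kismet names, as in the note above:
for line in ["Say Kismet", "Say Robert", "Say Evelyn", "Say Mark"]:
    print(say_behavior(line))  # -> kismet, robert, evelyn, mark
```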
3
Cog and Kismet were both built at the MIT Artificial Intelligence Laboratory. Cog has visual, tactile, and kinesthetic sensory systems and is capable of a variety of social tasks, including visually detecting people and salient objects, orienting to visual targets, pointing to visual targets, differentiating between animate and inanimate movement, and performing simple tasks of imitation. Kismet is a robotic head with five degrees of freedom, an active vision platform, and fourteen degrees of freedom in its display of facial expressions. Though the Kismet head sits disembodied on a platform, it is winsome in appearance. It possesses small, mobile ears made of folded paper, mobile lips made from red rubber tubing, and heavily lidded eyes ringed with false eyelashes. Its behaviors and capabilities are modeled on those of a preverbal infant. Kismet gives the impression of looking into people’s eyes and can recognize and generate speech and speech patterns, although to a limited degree.
Much has been written about these two very well-known robots. See Rodney A. Brooks et al., “The Cog Project: Building a Humanoid Robot,” in Computation for Metaphors, Analogy and Agents, vol. 1562 of Springer Lecture Notes in Artificial Intelligence, ed. C. Nehaniv (New York: Springer-Verlag, 1998), and Rodney Brooks, Flesh and Machines: How Robots Will Change Us (New York: Pantheon, 2002). Brian Scassellati did his dissertation work on Cog. See Brian Scassellati, “Foundations for a Theory of Mind for a Humanoid Robot” (PhD diss., Massachusetts Institute of Technology, 2001). Scassellati and Cynthia Breazeal worked together during early stages of the Kismet project, which became the foundation of Breazeal’s dissertation work. See Cynthia Breazeal and Brian Scassellati, “How to Build Robots That Make Friends and Influence People” (paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyongju, Korea, October 17-21, 1999), in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (1999), 858-863; Cynthia Breazeal and Brian Scassellati, “Infant-like Social Interactions Between a Robot and a Human Caretaker,” Adaptive Behavior 8 (2000): 49-74; Cynthia Breazeal, “Sociable Machines: Expressive Social Exchange Between Humans and Robots” (PhD diss., Massachusetts Institute of Technology, 2000); and Cynthia Breazeal, Designing Sociable Robots (Cambridge, MA: MIT Press, 2002).
4
Cynthia Breazeal discusses the astronaut project in Claudia Dreifus, “A Conversation with Cynthia Breazeal: A Passion to Build a Better Robot, One with Social Skills and a Smile,” New York Times, June 10, 2003, www.nytimes.com/2003/06/10/science/conversation-with-cynthia-breazeal-passion-build-better-robot-one-with-social.html?pagewanted=all (accessed September 9, 2009).
5
I cite this student in Sherry Turkle, The Second Self: Computers and the Human Spirit (1984; Cambridge, MA: MIT Press, 2005), 271. The full Norbert Wiener citation is “This is an idea with which I have toyed before—that it is conceptually possible for a human being to be sent over a telegraph line.” See Norbert Wiener, God and Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion (Cambridge, MA: MIT Press, 1966), 36.
6
People drawn to sociable robots seem to hit a wall that has come to be known as the “uncanny valley.” This phrase is believed to have been coined by Masahiro Mori in “The Uncanny Valley,” Energy 7, no. 4 (1970): 33-35. An English translation by Karl F. MacDorman and Takashi Minato is available at www.androidscience.com/theuncannyvalley/proceedings2005/uncannyvalley.html (accessed November 14, 2009).
If one plots a graph with humanlike appearance on the x axis and approval of the robot on the y axis, as the robot becomes more lifelike, approval increases until the robot becomes too lifelike, at which point approval plummets into a “valley.” When a robot is completely indistinguishable from a human, approval returns. Japanese roboticist Hiroshi Ishiguro thinks he is building realistic androids that are close to climbing out of the uncanny valley. See, for example, Karl F. MacDorman and Hiroshi Ishiguro, “The Uncanny Advantage of Using Androids in Cognitive and Social Science Research,” Interaction Studies 7, no. 3 (2006): 297-337, and Karl F. MacDorman et al., “Too Real for Comfort: Uncanny Responses to Computer Generated Faces,” Computers in Human Behavior 25 (2009): 695-710.
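Read literally, the description above traces a simple curve. The sketch below plots a schematic version of it; the control points are invented for illustration and are not data from Mori’s paper:

```python
# Schematic sketch of the uncanny valley curve described above.
# The control points are invented for illustration; they are not
# data from Mori's paper.
import numpy as np
import matplotlib.pyplot as plt

# (humanlike appearance, approval) pairs: approval rises with realism,
# plunges into the "valley" near full realism, then recovers for a
# perfect likeness.
likeness = [0.0, 0.3, 0.6, 0.75, 0.85, 1.0]
approval = [0.05, 0.35, 0.70, -0.40, 0.10, 0.90]

x = np.linspace(0.0, 1.0, 200)
y = np.interp(x, likeness, approval)  # piecewise-linear interpolation

plt.plot(x, y)
plt.axhline(0.0, color="gray", linewidth=0.5)
plt.xlabel("humanlike appearance")
plt.ylabel("approval of the robot")
plt.title("The uncanny valley (schematic)")
plt.show()
```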
Like Ishiguro, roboticist David Hanson aspires to build realistic androids that challenge the notion of the uncanny valley. “We conclude that rendering the social human in all possible detail can help us to better understand social intelligence, both scientifically and artistically.” See David Hanson et al., “Upending the Uncanny Valley,” Association for the Advancement of Artificial Intelligence, May 11, 2005, www.aaai.org/Papers/Workshops/2005/WS-05-11/WS05-11-005.pdf (accessed November 14, 2009).
