Bohr’s dictum is equally true in the area of sociable robotics, where things are no less tangled. Roboticists insist that robotic emotions are made up of the same ultimate particles as human ones (because mind is ultimately made of matter), but it is also true that robots’ claims to emotion derive from programs designed to get an emotional rise out of us.[4]
Roboticists present, as though it were a first principle, the idea that as our population ages, we simply won’t have enough people to take care of our human needs, and so, as a companion, a sociable robot is “better than nothing.” But what are our first principles? We know that we warm to machines when they seem to show interest in us, when their affordances speak to our vulnerabilities. But we don’t have to say yes to everything that speaks to us in this way. Even if, as adults, we are intrigued by the idea that a sociable robot will distract our aging parents, our children ask, “Don’t we have people for these jobs?” We should attend to their hesitations. Sorting all this out will not be easy. But we are at a crossroads—at a time and place to initiate new conversations.
As I was working on this book, I discussed its themes with a former colleague, Richard, who has been left severely disabled by an automobile accident. He is now confined to a wheelchair in his home and needs nearly full-time nursing care. Richard is interested in robots being developed to provide practical help and companionship to people in his situation, but his reaction to the idea is complex. He begins by saying, “Show me a person in my shoes who is looking for a robot, and I’ll show you someone who is looking for a person and can’t find one,” but then he makes the best possible case for robotic helpers when he turns the conversation to human cruelty. “Some of the aides and nurses at the rehab center hurt you because they are unskilled, and some hurt you because they mean to. I had both. One of them, she pulled me by the hair. One dragged me by my tubes. A robot would never do that,” he says. And then he adds, “But you know, in the end, that person who dragged me by my tubes had a story. I could find out about it. She had a story.”
For Richard, being with a person, even an unpleasant, sadistic person, makes him feel that he is still alive. It signifies that his way of being in the world has a certain dignity, even if his activities are radically curtailed. For him, dignity requires a feeling of authenticity, a sense of being connected to the human narrative. It helps sustain him. Although he would not want his life endangered, he prefers the sadist to the robot.
Richard’s perspective is a cautionary tale to those who would speak in too-simple terms of purely technical benchmarks for human and machine interactions. We animate robotic creatures by projecting meaning onto them and are thus tempted to speak of their emotions and even their “authenticity.” We can do this if we focus on the feelings that robots evoke in us. But too often the unasked question is, What does the robot feel? We know what the robot cannot feel: it cannot feel human empathy or the flow of human connection. Indeed, the robot can feel nothing at all. Do we care? Or does the performance of feeling now suffice? Why would we want to be in conversation with machines that cannot understand or care for us? The question was first raised for me by the ELIZA computer program.[5]
What made ELIZA a valued interlocutor? What matters were so private that they could only be discussed with a machine?
Over years and with some reluctance, I came to understand that ELIZA’s popularity revealed more than people’s willingness to talk to machines; it revealed their reluctance to talk to other people.[6]
The idea of an attentive machine provides the fantasy that we may escape from each other. When we say we look forward to computer judges, counselors, teachers, and pastors, we comment on our disappointments with people who have not cared or who have treated us with bias or even abuse. These disappointments begin to make a machine’s performance of caring seem like caring enough. We are willing to put aside a program’s lack of understanding and, indeed, to work to make it seem to understand more than it does—all to create the fantasy that there is an alternative to people. This is the deeper “ELIZA effect.” Trust in ELIZA does not speak to what we think ELIZA will understand but to our lack of trust in the people who might understand.
Kevin Kelly asks, “What does technology want?” and insists that, whatever it is, technology is going to get it. Accepting his premise, what if one of the things technology wants is to exploit our disappointments and emotional vulnerabilities? When this is what technology wants, it wants to be a symptom.
SYMPTOMS AND DREAMS
Because we are wary of each other, the idea of a robot companion brings a sense of control, of welcome substitution. We allow ourselves to be comforted by unrequited love, for there is no robot that can ever love us back. That same wariness marks our networked lives. There, too, we are vulnerable to a desire to control our connections, to titrate our level of availability. Things progress quickly. A lawyer says sensibly, “I can’t make it to a client meeting; I’ll send notes by e-mail instead.” Five steps later, colleagues who work on the same corridor no longer want to see or even telephone each other and explain that “texts are more efficient” or “I’ll post something on Facebook.”
As we live the flowering of connectivity culture, we dream of sociable robots.[7]
Lonely despite our connections, we send ourselves a technological Valentine. If online life is harsh and judgmental, the robot will always be on our side. The idea of a robot companion serves as both symptom and dream. Like all psychological symptoms, it obscures a problem by “solving” it without addressing it. As a symptom, the robot promises companionship while masking our fears of too-risky intimacies. As a dream, it reveals our wish for relationships we can control.
A symptom carries knowledge that a person fears would be too much to bear. To do its job, a symptom disguises this knowledge so it doesn’t have to be faced day to day.[8]
So, it is “easier” to feel constantly hungry than to acknowledge that your mother did not nurture you. It is “easier” to be enraged by a long supermarket line than to deal with the feeling that your spouse is not giving you the attention you crave. When technology is a symptom, it disconnects us from our real struggles.
In treatment, symptoms disappear because they become irrelevant. Patients become more interested in looking at what symptoms hide—the ordinary thoughts and experiences of which they are the strangulated expression. So when we look at technology as symptom and dream, we shift our attention away from technology and onto ourselves. As Henry David Thoreau might ask, “Where do we live, and what do we live for?” Kelly writes of technophilia as our natural state: we love our objects and follow where they lead.[9]
I would reframe his insight: we love our objects, but enchantment comes with a price.
The psychoanalytic tradition teaches that all creativity has a cost, a caution that applies to psychoanalysis itself.[10]
For psychoanalyst Robert Caper, “The transgression in the analytic enterprise is not that we try to make things better; the transgression is that we don’t allow ourselves to see its costs and limitations.”[11]
To make his point, Caper revisits the story of Oedipus. As his story is traditionally understood, Oedipus is punished for seeking knowledge—in particular, the knowledge of his parentage. Caper suggests that he is punished for something else: his refusal to recognize the limitations of knowledge. The parallel with technology is clear: we transgress not because we try to build the new but because we don’t allow ourselves to consider what it disrupts or diminishes. We are not in trouble because of invention but because we think it will solve everything.
A successful analysis disturbs the field in the interest of long-term gain; it learns to repair along the way.[12]
One moves forward in a chastened, self-reflective spirit. Acknowledging limits, stopping to make the corrections, doubling back—these are at the heart of the ethic of psychoanalysis. A similar approach to technology frees us from unbending narratives of technological optimism or despair. Consider how it would modulate Kelly’s argument about technophilia. Kelly refers to Henry Adams, who in 1900 had a moment of rapture when he first set eyes on forty-foot dynamos. Adams saw them as “symbols of infinity, objects that projected a moral force, much as the early Christians felt the cross.”[13]
Kelly believes that Adams’s desire to be at one with the dynamo foreshadows how he himself now feels about the Web. As we have seen, Kelly wants to merge with the Web, to find its “lovely surrender.” He continues,
I find myself indebted to the net for its provisions. It is a steadfast benefactor, always there. I caress it with my fidgety fingers; it yields up my desires, like a lover.... I want to remain submerged in its bottomless abundance. To stay. To be wrapped in its dreamy embrace. Surrendering to the web is like going on aboriginal walkabout. The comforting illogic of dreams reigns. In dreamtime you jump from one page, one thought, to another.... The net’s daydreams have touched my own, and stirred my heart. If you can honestly love a cat, which can’t give you directions to a stranger’s house, why can’t you love the web?[14]
Kelly has a view of connectivity as something that may assuage our deepest fears—of loneliness, loss, and death. This is the rapture. But connectivity also disrupts our attachments to things that have always sustained us—for example, the value we put on face-to-face human connection. Psychoanalysis, with its emphasis on the comedy and tragedy in the arc of human life, can help keep us focused on the specificity of human conversation. Kelly is enthralled by the Web’s promise of limitless knowledge, its “bottomless abundance.” But the Oedipal story reminds us that rapture is costly; it usually means you are overlooking consequences.
Oedipus is also a story about the difference between getting what you want and getting what you think you want. Technology gives us more and more of what we think we want. These days, looking at sociable robots and digitized friends, one might assume that what we want is to be always in touch and never alone, no matter who or what we are in touch with. One might assume that what we want is a preponderance of weak ties, the informal networks that underpin online acquaintanceship. But if we pay attention to the real consequences of what we think we want, we may discover what we really want. We may want some stillness and solitude. As Thoreau put it, we may want to live less “thickly” and wait for less frequent but more meaningful face-to-face encounters. As we put in our many hours of typing—with all fingers or just thumbs—we may discover that we miss the human voice. We may decide that it is fine to play chess with a robot, but that robots are unfit for any conversation about family or friends. A robot might have needs, but to understand desire, one needs language and flesh. We may decide that for these conversations, we must have a person who knows, firsthand, what it means to be born, to have parents and a family, to wish for adult love and perhaps children, and to anticipate death. And, of course, no matter how much “wilderness” Kelly finds on the Web, we are not in a position to let the virtual take us away from our stewardship of nature, the nature that doesn’t go away with a power outage.
We let things get away from us. Even now, we are emotionally dependent on online friends and intrigued by robots that, their designers claim, are almost ready to love us.[15]
And brave Kevin Kelly says what others are too timid to admit: he is in love with the Web itself. It has become something both erotic and idealized. What are we missing in our lives together that leads us to prefer lives alone together? As I have said, every new technology challenges us, generation after generation, to ask whether it serves our human purposes, a question that forces us to reconsider what those purposes are.
In a design seminar, master architect Louis Kahn once asked, “What does a brick want?”[16]
In that spirit, if we ask, “What does simulation want?” we know what it wants. It wants—it demands—immersion. But once we are immersed in simulation, it can be hard to remember all that lies beyond it or even to acknowledge that everything is not captured by it. For simulation not only demands immersion but creates a self that prefers simulation. Simulation offers relationships simpler than real life can provide. We become accustomed to the reductions and betrayals that prepare us for life with the robotic.
But being prepared does not mean that we need to take the next step. Sociable robotics puts science into the game of intimacy and the most sensitive moments of children’s development. There is no one to tell science what it cannot do, but here one wishes for a referee. Things start innocently: neuroscientists want to study attachment. But things end reductively, with claims that a robot “knows” how to form attachments because it has the algorithms. The dream of today’s roboticists is no less than to reverse engineer love. Are we indifferent to whether we are loved by robots or by our own kind?
In Philip K. Dick’s classic science fiction story “Do Androids Dream of Electric Sheep?” (which most people know through its film adaptation, Blade Runner), loving and being loved by a robot seems a good thing. The film’s hero, Deckard, is a professional robot hunter in a world where humans and robots look and sound alike. He falls in love with Rachael, an android programmed with human memories and the knowledge that she will “die.” I have argued that knowledge of mortality and an experience of the life cycle are what make us uniquely human. This brilliant story asks whether the simulation of these things will suffice.