The Glass Cage: Automation and Us

It was a curious speech. The event was the 2013 TED conference, held in late February at the Long Beach Performing Arts Center near Los Angeles. The scruffy guy on stage, fidgeting uncomfortably and talking in a halting voice, was Sergey Brin, reputedly the more outgoing of Google’s two founders. He was there to deliver a marketing pitch for Glass, the company’s “head-mounted computer.” After airing a brief promotional video, he launched into a scornful critique of the smartphone, a device that Google, with its Android system, had helped push into the mainstream. Pulling his own phone from his pocket, Brin looked at it with disdain. Using a smartphone is “kind of emasculating,” he said. “You know, you’re standing around there, and you’re just like rubbing this featureless piece of glass.” In addition to being “socially isolating,” staring down at a screen weakens a person’s sensory engagement with the physical world, he suggested. “Is this what you were meant to do with your body?”20

Having dispatched the smartphone, Brin went on to extol the benefits of Glass. The new device would provide a far superior “form factor” for personal computing, he said. By freeing people’s hands and allowing them to keep their head up and eyes forward, it would reconnect them with their surroundings. They’d rejoin the world. It had other advantages too. By putting a computer screen permanently within view, the high-tech eyeglasses would allow Google, through its Google Now service and other tracking and personalization routines, to deliver pertinent information to people whenever the device sensed they required advice or assistance. The company would fulfill the greatest of its ambitions: to automate the flow of information into the mind. Forget the autocomplete functions of Google Suggest. With Glass on your brow, Brin said, echoing his colleague Ray Kurzweil, you would no longer have to search the web at all. You wouldn’t have to formulate queries or sort through results or follow trails of links. “You’d just have information come to you as you needed it.”21 To the computer’s omnipresence would be added omniscience.

Brin’s awkward presentation earned him the ridicule of technology bloggers. Still, he had a point. Smartphones enchant, but they also enervate. The human brain is incapable of concentrating on two things at once. Every glance or swipe at a touchscreen draws us away from our immediate surroundings. With a smartphone in hand, we become a little ghostly, wavering between worlds. People have always been distractible, of course. Minds wander. Attention drifts. But we’ve never carried on our person a tool that so insistently captivates our senses and divides our attention. By connecting us to a symbolic elsewhere, the smartphone, as Brin implied, exiles us from the here and now. We lose the power of presence.

Brin’s assurance that Glass would solve the problem was less convincing. No doubt there are times when having your hands free while consulting a computer or using a camera would be an advantage. But peering into a screen that floats in front of you requires no less an investment of attention than glancing at one held in your lap. It may require more. Research on pilots and drivers who use head-up displays reveals that when people look at text or graphics projected as an overlay on the environment, they become susceptible to “attentional tunneling.” Their focus narrows, their eyes fix on the display, and they become oblivious to everything else going on in their field of view.22 In one experiment, performed in a flight simulator, pilots using a head-up display during a landing took longer to see a large plane obstructing the runway than did pilots who had to glance down to check their instrument readings. Two of the pilots using the head-up display never even saw the plane sitting directly in front of them.23 “Perception requires both your eyes and your mind,” psychology professors Daniel Simons and Christopher Chabris explained in a 2013 article on the dangers of Glass, “and if your mind is engaged, you can fail to see something that would otherwise be utterly obvious.”24

Glass’s display is also, by design, hard to escape. Hovering above your eye, it’s always at the ready, requiring but a glance to call into view. At least a phone can be stuffed into a pocket or handbag, or slipped into a car’s cup holder. The fact that you interact with Glass through spoken words, head movements, hand gestures, and finger taps further tightens its claim on the mind and senses. As for the audio signals that announce incoming alerts and messages—sent, as Brin boasted in his TED talk, “right through the bones in your cranium”—they hardly seem less intrusive than the beeps and buzzes of a phone. However emasculating a smartphone may be, metaphorically speaking, a computer attached to your forehead promises to be worse.

Wearable computers, whether sported on the head like Google’s Glass and Facebook’s Oculus Rift or on the wrist like the Pebble smartwatch, are new, and their appeal remains unproven. They’ll have to overcome some big obstacles if they’re to gain wide popularity. Their features are at this point sparse, they look dorky—London’s Guardian newspaper refers to Glass as “those dreadful specs”25—and their tiny built-in cameras make a lot of people nervous. But, like other personal computers before them, they’ll improve quickly, and they’ll almost certainly morph into less obtrusive, more useful forms. The idea of wearing a computer may seem strange today, but in ten years it could be the norm. We may even find ourselves swallowing pill-sized nanocomputers to monitor our biochemistry and organ function.

Brin is mistaken, though, in suggesting that Glass and other such devices represent a break from computing’s past. They give the established technological momentum even more force. As the smartphone and then the tablet made general-purpose, networked computers more portable and personable, they also made it possible for software companies to program many more aspects of our lives. Together with cheap, friendly apps, they allowed the cloud-computing infrastructure to be used to automate even the most mundane of chores. Computerized glasses and wristwatches further extend automation’s reach. They make it easier to receive turn-by-turn directions when walking or riding a bike, for instance, or to get algorithmically generated advice on where to grab your next meal or what clothes to put on for a night out. They also serve as sensors for the body, allowing information about your location, thoughts, and health to be transmitted back to the cloud. That in turn provides software writers and entrepreneurs with yet more opportunities to automate the quotidian.

We’ve put into motion a cycle that, depending on your point of view, is either virtuous or vicious. As we grow more reliant on applications and algorithms, we become less capable of acting without their aid—we experience skill tunneling as well as attentional tunneling. That makes the software more indispensable still. Automation breeds automation. With everyone expecting to manage their lives through screens, society naturally adapts its routines and procedures to fit the routines and procedures of the computer. What can’t be accomplished with software—what isn’t amenable to computation and hence resists automation—begins to seem dispensable.

The PARC researchers argued, back in the early 1990s, that we’d know computing had achieved ubiquity when we were no longer aware of its presence. Computers would be so thoroughly enmeshed in our lives that they’d be invisible to us. We’d “use them unconsciously to accomplish everyday tasks.”26 That seemed a pipe dream in the days when bulky PCs drew attention to themselves by freezing, crashing, or otherwise misbehaving at inopportune moments. It doesn’t seem like such a pipe dream anymore. Many computer companies and software houses now say they’re working to make their products invisible. “I am super excited about technologies that disappear completely,” declares Jack Dorsey, a prominent Silicon Valley entrepreneur. “We’re doing this with Twitter, and we’re doing this with [the online credit-card processor] Square.”27 When Mark Zuckerberg calls Facebook “a utility,” as he frequently does, he’s signaling that he wants the social network to merge into our lives the way the telephone system and electric grid did.28 Apple has promoted the iPad as a device that “gets out of the way.” Picking up on the theme, Google markets Glass as a means of “getting technology out of the way.” In a 2013 speech, the company’s then head of social networking, Vic Gundotra, even put a flower-power spin on the slogan: “Technology should get out of the way so you can live, learn, and love.”29

The technologists may be guilty of bombast, but they’re not guilty of cynicism. They’re genuine in their belief that the more computerized our lives become, the happier we’ll be. That, after all, has been their own experience. But their aspiration is self-serving nonetheless. For a popular technology to become invisible, it first has to become so essential to people’s existence that they can no longer imagine being without it. It’s only when a technology surrounds us that it disappears from view. Justin Rattner, Intel’s chief technology officer, has said that he expects his company’s products to become so much a part of people’s “context” that Intel will be able to provide them with “pervasive assistance.”30 Instilling such dependency in customers would also, it seems safe to say, bring in a lot more money for Intel and other computer companies. For a business, there’s nothing like turning a customer into a supplicant.

The prospect of having a complicated technology fade into the background, so it can be employed with little effort or thought, can be as appealing to those who use it as to those who sell it. “When technology gets out of the way, we are liberated from it,” the New York Times columnist Nick Bilton has written.31 But it’s not that simple. You don’t just flip a switch to make a technology invisible. It disappears only after a slow process of cultural and personal acclimation. As we habituate ourselves to it, the technology comes to exert more power over us, not less. We may be oblivious to the constraints it imposes on our lives, but the constraints remain. As the French sociologist Bruno Latour points out, the invisibility of a familiar technology is “a kind of optical illusion.” It obscures the way we’ve refashioned ourselves to accommodate the technology. The tool that we originally used to fulfill some particular intention of our own begins to impose on us its intentions, or the intentions of its maker. “If we fail to recognize,” Latour writes, “how much the use of a technique, however simple, has displaced, translated, modified, or inflected the initial intention, it is simply because we have changed the end in changing the means, and because, through a slipping of the will, we have begun to wish something quite else from what we at first desired.”32

The difficult ethical questions raised by the prospect of programming robotic cars and soldiers—who controls the software? who chooses what’s to be optimized? whose intentions and interests are reflected in the code?—are equally pertinent to the development of the applications used to automate our lives. As the programs gain more sway over us—shaping the way we work, the information we see, the routes we travel, our interactions with others—they become a form of remote control. Unlike robots or drones, we have the freedom to reject the software’s instructions and suggestions. It’s difficult, though, to escape their influence. When we launch an app, we ask to be guided—we place ourselves in the machine’s care.

Look more closely at Google Maps. When you’re traveling through a city and you consult the app, it gives you more than navigational tips; it gives you a way to think about cities. Embedded in the software is a philosophy of place, which reflects, among other things, Google’s commercial interests, the backgrounds and biases of its programmers, and the strengths and limitations of software in representing space. In 2013, the company rolled out a new version of Google Maps. Instead of providing you with the same representation of a city that everyone else sees, it generates a map that’s tailored to what Google perceives as your needs and desires, based on information the company has collected about you. The app will highlight nearby restaurants and other points of interest that friends in your social network have recommended. It will give you directions that reflect your past navigational choices. The views you see, the company says, are “unique to you, always adapting to the task you want to perform right this minute.”33

That sounds appealing, but it’s limiting. Google filters out serendipity in favor of insularity. It douses the infectious messiness of a city with an algorithmic antiseptic. What is arguably the most important way of looking at a city, as a public space shared not just with your pals but with an enormously varied group of strangers, gets lost. “Google’s urbanism,” comments the technology critic Evgeny Morozov, “is that of someone who is trying to get to a shopping mall in their self-driving car. It’s profoundly utilitarian, even selfish in character, with little to no concern for how public space is experienced. In Google’s world, public space is just something that stands between your house and the well-reviewed restaurant that you are dying to get to.”34 Expedience trumps all.
