Traffic

by Tom Vanderbilt

One important caveat of the American Late Merge is that it achieved its superior performance in congested conditions—the time, of course, when work-zone merging becomes most problematic. When traffic is flowing freely, there are obvious logistical problems with driving at 75 miles per hour to the end of a lane and then “taking your turn” at the last moment. That is why traffic engineers began working on a refinement, the “Dynamic Late Merge.” This employs “changeable message signs” and flashing warnings that are activated when the traffic volume reaches the point at which late merging would be more desirable. When traffic is light, the signs call for a conventional merge.

But as a Dynamic Late Merge trial undertaken by the Minnesota Department of Transportation on Interstate 35 in the summer of 2003 showed, the best-laid plans of traffic engineers often run aground on the rocky shoals of human behavior. While the experiment was able to reduce the length of queues by 35 percent, it found that vehicle volume through the merge actually decreased.

What happened? It seemed that many drivers, despite the instructions urging them to USE BOTH LANES, either did not understand the command or refused to follow it. Only a few drivers in the lane to be closed actually made it to the sign that said, quite plainly, MERGE HERE. Some vehicles simply merged early into the “continuous lane,” while others found themselves blocked by trucks and other self-appointed, lane-straddling “traffic cops” who, despite the messages, seemed intent on preserving a single queue—often to the point of aggressively weaving to block a vehicle from passing. (Truck drivers, perhaps because they have the most difficulty accelerating and merging at work zones, seemed especially determined to keep that single queue intact.) Some drivers in the ending lane were observed “pacing” themselves next to a car in the open lane, as if they thought it rude to go faster than anyone else (this was Minnesota, after all, the home of that Paul Bunyan–sized politeness they call “Minnesota Nice”). When this happened, the drivers following them seemed simply to give up and merge early. None of this was what the DOT had in mind, as it bemoaned in a report: “These multiple merging locations created unnecessary disruptions in the traffic flow, slowing vehicles and creating more stop-and-go conditions than necessary.”

The result was that drivers, whether acting out of perceived courtesy or a sense of vigilante justice, thought they were doing the right thing. In fact, they were slowing things down for everyone. One might be willing to forgive the lost time if these drivers were somehow making the work zone safer or less stressful, but that was not the case; rather, they created confusion by not following instructions or by acting with hostility toward those who tried to do so. The Minnesota DOT seemed quite puzzled: “For some unknown reason, a small number of drivers were unwilling to change their old driving behaviors.” Things did get better over time—but by then the construction project was finished.

Beyond simple engineering, there seems to be a whole worldview contained in each of the merge strategies that have been tried. The Early Merge strategy implies that people are good. They want to do the right thing. They want to merge as soon as possible, and with as little negotiation as possible. They can eschew temptation in favor of cooperation. The line might be a little longer, but it seems a small price for working toward the common good. The Late Merge strategy suggests that people are not as good, or only as good as circumstances allow. Rather than having people choose among themselves where and when and in front of whom to merge, it picks the spot, and the rules, for them. Late Merge also posits that the presence of that seductively traffic-free space will be too tempting for the average mortal, and so simply removes it. And the conventional merge, the one that most of us seem to find ourselves in each day? This is strictly laissez-faire. It gives people a set of circumstances and only a vague directive of what to do and leaves the rest up to them. This tosses the late mergers and the early mergers together in an unholy tempest of conflicting beliefs, expectations, and actions. Perhaps not surprisingly, it performs the worst of all.

I suggest the following: The next time you find yourself on a congested four-lane road and you see that a forced merge is coming, don’t panic. Do not stop, do not swerve into the other lane. Simply stay in your lane—if there is a lot of traffic, the distribution between the two lanes should be more or less equal—all the way to the merge point. Those in the lane that remains open should allow one car from the closing lane in ahead of them, and then proceed (those doing the merging must take a similar turn). By working together, by abandoning our individual preferences and our distrust of others’ preferences in favor of a simple set of objective rules, we can make things better for everyone.
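To see why using both lanes matters, consider a back-of-the-envelope sketch. The vehicle count and spacing below are invented for illustration; they are not from the Minnesota trial:

    # A toy illustration of why the Late Merge shortens queues (numbers
    # invented for illustration; not from the Minnesota trial).

    CAR_LENGTH_M = 7.5        # assumed: average vehicle length plus gap, meters
    cars_waiting = 120        # assumed: vehicles queued at the work zone

    # Early merge: everyone crowds into the single continuous lane.
    early_merge_queue_m = cars_waiting * CAR_LENGTH_M

    # Late merge: both lanes store cars all the way to the merge point.
    late_merge_queue_m = (cars_waiting / 2) * CAR_LENGTH_M

    print(f"Early merge queue: {early_merge_queue_m:.0f} m")   # 900 m
    print(f"Late merge queue:  {late_merge_queue_m:.0f} m")    # 450 m

The shorter physical queue is the effect the Minnesota trial measured (a 35 percent reduction in practice), and the take-your-turn rule at the merge point is what keeps that shorter queue from costing any throughput.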

Why You’re Not as Good a Driver as You Think You Are

If Driving Is So Easy, Why Is It So Hard for a Robot? What Teaching Machines to Drive Teaches Us About Driving

As you wish, Mr. Knight. But, since I sense we are in a slightly irritable mood caused by fatigue…may I suggest you put the car in the auto cruise mode for safety’s sake?

—K.I.T.T., Knight Rider

For those of us who aren’t brain surgeons, driving is probably the most complex everyday thing we do. It is a skill that consists of at least fifteen hundred “subskills.” At any moment, we are navigating through terrain, scanning our environment for hazards and information, maintaining our position on the road, judging speed, making decisions (about twenty per mile, one study found), evaluating risk, adjusting instruments, anticipating the future actions of others—even as we may be sipping a latte, thinking about last night’s episode of American Idol, quieting a toddler, or checking voice mail. A survey of one stretch of road in Maryland found that a piece of information was presented every two feet, which at 30 miles per hour, the study reasoned, meant the driver was exposed to 1,320 “items of information,” or roughly 440 words, per minute. This is akin to reading three paragraphs like this one while also looking at lots of pretty pictures, not to mention doing all the other things mentioned above—and then repeating the cycle, every minute you drive.
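The study’s arithmetic is easy to reproduce (the three-items-per-word conversion below is implied by its own figures):

    # Reproducing the Maryland study's arithmetic: one item of information
    # every two feet, at 30 miles per hour. (The ~3 items-per-word conversion
    # is implied by the study's own 1,320-to-440 figures.)

    FEET_PER_MILE = 5280
    speed_mph = 30

    feet_per_minute = speed_mph * FEET_PER_MILE / 60    # 2,640 feet each minute
    items_per_minute = feet_per_minute / 2              # one item per 2 feet
    words_per_minute = items_per_minute / 3             # ~3 items per word

    print(items_per_minute, words_per_minute)           # 1320.0 440.0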

Because we seem to do this all so easily, we tend not to dwell on it. Driving becomes like breathing or an involuntary reflex. We just do it. It just happens. But to think anew about this rather astonishing human ability, it’s worth pausing to consider what it actually takes to get a nonhuman to drive. This is a problem that Sebastian Thrun, director of the Artificial Intelligence Laboratory at Stanford University, and his team have dedicated themselves to for the last few years. In 2005, Thrun and his colleagues won the Defense Advanced Research Projects Agency’s Grand Challenge, a 132-mile race through a tortuous course in the Mojave Desert. Their “autonomous vehicle,” a Volkswagen Touareg named Stanley, using only GPS coordinates, cameras, and a variety of sensors, completed the course in just under seven hours, averaging a rather robust 19.1 miles per hour.

Stanley won because Thrun and his team, after a series of failures, changed their method of driving instruction. “We started teaching Stanley much more like an apprentice than a computer,” Thrun told me. “Instead of telling Stanley, ‘If the following condition occurs, invoke the following action,’ we would give an example and train him.” It would not work, for example, to simply tell Stanley to drive at a certain speed limit. “A person would slow down when they hit a rut,” Thrun said. “But a robot is not that smart. It would keep driving at thirty miles per hour until its death.” Instead, Thrun took the wheel and had Stanley record the way he drove, carefully noting his speed and the amount of shock the vehicle was absorbing. Stanley watched how Sebastian responded when the road narrowed, or when the shock level of his chassis went beyond a certain threshold.
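Thrun does not spell out the mechanics, but the apprenticeship idea can be sketched simply: log pairs of shock reading and the speed the human chose during a demonstration drive, then fit a mapping from roughness to target speed. Below is a minimal sketch; the data are invented, and a plain least-squares line stands in for whatever the Stanford team actually used:

    # A minimal sketch of learning speed from demonstration (data invented;
    # a least-squares line stands in for the Stanford team's actual method).

    # Logged while the human drove: (chassis shock reading, chosen speed in mph)
    demonstration = [(0.1, 35.0), (0.3, 28.0), (0.6, 18.0), (0.9, 9.0)]

    n = len(demonstration)
    mean_x = sum(x for x, _ in demonstration) / n
    mean_y = sum(y for _, y in demonstration) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in demonstration)
             / sum((x - mean_x) ** 2 for x, _ in demonstration))
    intercept = mean_y - slope * mean_x

    def target_speed(shock):
        """Drive the way the demonstrator drove on terrain this rough."""
        return max(0.0, intercept + slope * shock)

    # A rut registers as heavy shock, so the learned policy slows down
    # instead of holding 30 mph "until its death."
    print(round(target_speed(0.7), 1))   # ~15.2 mph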

Stanley was learning the way most of us learn to drive, not through rote classroom memorization of traffic rules and the viewing of blood-soaked safety films but through real-world observation, sitting in the backseats of our parents’ cars. For Thrun, the process made him begin “questioning what a rule really is.” The basic rules were simple: Drive on this road under this speed limit from this point to this point. But giving Stanley rules that were too rigid would cause him to overreact, like the autistic character played by Dustin Hoffman in the film Rain Man, who stops while crossing an intersection because the sign changes to DO NOT WALK.
What about when the conventions are violated, as they so often are in driving? “Nothing says that a tumbleweed has to stay outside the drivable corridor,” Thrun explained. In other words, stuff happens. There are myriad moments of uncertainty, or “noise.” In the same way we do things like judge whether the police car with the flashing lights has already pulled someone else over, Stanley needs to decipher the puzzling world of the road: Is that a rock in the middle of the street or a paper bag? Is that a speed bump in the road or someone who fell off their bike? The restrictions on a New York City “No Parking” sign alone would bring Stanley to his knees.

If all this seems complicated enough, now consider doing all of it in the kind of environment in which most of us typically drive: not lonely desert passes but busy city and suburban streets. When I caught up with Thrun, this is exactly what was on his mind, for he was in the testing phase for DARPA’s next race, the Urban Challenge. This time the course would be in a city environment, with off-roading Stanley retired in favor of sensible Junior, a 2006 VW Passat Wagon. The goal, according to DARPA, would be “safe and correct autonomous driving capability in traffic at 20 mph,” including “merging into moving traffic, navigating traffic circles, negotiating busy intersections, and avoiding obstacles.”

We do not always get these things right ourselves, but most drivers make any number of complex maneuvers each day without any trouble. Teaching a machine to do this presents elemental problems. Simply analyzing any random traffic scene, as we constantly do, is an enormous undertaking. It requires not only recognizing objects, but understanding how they relate to one another, not just at that moment but in the future. Thrun uses the example of a driver coming upon a traffic island versus a stationary car. “If there’s a stationary car you behave fundamentally differently, you queue up behind it,” he says. “If it’s a traffic island you just drive around it. Humans take for granted that we can just look at this and recognize it instantly. To take camera data and be able to understand this is a traffic island, that technology just doesn’t exist.” Outside of forty meters or so, Junior, according to Thrun, does not have a clue about what the approaching obstacle is; he simply sees that it is an obstacle.

In certain ways, Junior has advantages over humans, which is precisely why some robotic devices, like adaptive cruise control—which tracks via lasers the distance to the car in front and reacts accordingly—have already begun to appear in cars. When calculating the distance between himself and the car ahead, as with ACC, Junior is much more accurate than we are—to within one meter, according to Michael Montemerlo, a researcher at Stanford. “People always ask if Junior will sense other people’s brake lights,” Montemerlo said. “Our answer is, you don’t really have to. Junior has the ability to measure the velocity of another car very precisely. That will tell you a car’s braking. You actually get their velocity instead of this one bit of information saying ‘I’m slowing down.’ That’s much more information than a person gets.”
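Montemerlo’s point about velocity comes down to differencing successive range readings: if you know how far away the lead car is on every scan, you know how fast the gap is changing, and adding your own speed gives the lead car’s speed. A minimal sketch with invented readings (a production ACC system would filter sensor noise much more carefully, for instance with a Kalman filter):

    # Estimating the lead car's speed from laser range readings (readings
    # invented; real ACC systems filter noise much more carefully).

    SCAN_INTERVAL_S = 0.1     # assumed time between scans
    own_speed_ms = 25.0       # our speed, meters/second, from the odometer

    ranges_m = [30.0, 29.6, 29.1, 28.5, 27.8]   # gap to the car ahead, per scan

    for prev, curr in zip(ranges_m, ranges_m[1:]):
        closing_speed = (prev - curr) / SCAN_INTERVAL_S   # gap shrink rate
        lead_speed = own_speed_ms - closing_speed
        print(f"gap {curr:5.1f} m   lead car {lead_speed:4.1f} m/s")

    # Output shows the lead car at 21, 20, 19, 18 m/s: it is braking, and we
    # know by exactly how much, with no brake light required.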

Driving involves not just the fidelity of perception but knowing what to do with the information. For Stanley, the task was relatively simple. “It was just one robot out in the desert all by himself,” Montemerlo said. “Stanley’s understanding of the world is very basic, actually just completely geometric. The goal was just to always take the good terrain and avoid the bad terrain. It’s not possible to drive in an urban setting with that limited understanding of the world. You actually have to take and interpret what you’re seeing and exhibit a higher-level understanding.” When we approach a traffic signal that has just gone yellow, for example, we engage in a complex chain of instantaneous processing and decision making: How much longer will the light be yellow? Will I have time (or space) to brake? If I accelerate will I make it, and how fast do I have to go to do so? Will I be struck by the tailgater behind if I slam on the brakes? Is there a red-light camera? Are the roads wet? Will I be caught in the intersection, “blocking the box”?

Engineers call the moment when we’re too close to the amber light to stop and yet too far to make it through without catching some of the red phase the “dilemma zone.” And a dilemma it is. Judging by crash rates, more drivers are struck from the rear when they try to stop for the light, but more serious crashes occur when drivers proceed and are hit broadside by a car entering the intersection. Do you take the higher chance of a less serious crash or the lower chance of a more serious crash? Engineers can make the yellow light last longer, but this reduces the capacity of the intersection—and once word gets out on the generous signal timing, it may just encourage more drivers to speed up and go for it.
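The dilemma zone itself falls out of textbook kinematics: a driver can stop comfortably only from at least a certain distance, and can clear the intersection only from at most a certain distance; if the first exceeds the second, everyone in between is trapped. A sketch with standard but here-assumed values:

    # The classic dilemma-zone calculation (textbook kinematics; reaction
    # time, deceleration, and yellow duration are assumed values).

    v = 15.6          # approach speed, m/s (roughly 35 mph)
    t_react = 1.0     # driver reaction time, seconds
    decel = 3.0       # comfortable braking, m/s^2
    yellow_s = 3.0    # yellow-phase duration, seconds
    w_plus_l = 25.0   # intersection width plus vehicle length, meters

    x_stop = v * t_react + v**2 / (2 * decel)   # nearest point from which
                                                # you can still stop: ~56 m
    x_go = v * yellow_s - w_plus_l              # farthest point from which
                                                # you can still clear: ~22 m

    if x_stop > x_go:
        print(f"Cars between {x_go:.0f} m and {x_stop:.0f} m can neither "
              "stop in time nor clear before the red.")

Lengthening the yellow raises x_go and shrinks the zone, which is exactly the trade-off described above: more yellow means less intersection capacity, and an invitation to speed up once drivers learn of it.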

Some people have even proposed signs that warn the driver in advance that the light is about to turn amber, a sort of “caution for the caution” that extends what is called the “indecision zone.” But a study in Austria that looked at intersections where the green signal flashes briefly before turning yellow found mixed results: Fewer drivers went through the red light than at intersections without the flashing green, but more drivers stopped sooner than necessary. The danger of the latter result was shown in a study of intersections in Israel where the “flashing green” system had been introduced: there were more rear-end collisions at those intersections than at those without it. The longer the indecision zone, the more cars are in it, the more go-or-stop decisions are being made, and thus the more chances to crash.

In traffic, these sorts of dilemma zones occur all the time. There are no pedestrians present in the Grand Challenge (“Thank God,” said Montemerlo); they would represent a massive problem for Junior. “I’ve thought a lot about what would happen if you let Junior loose in the real world,” Montemerlo said. Driving at Stanford is relatively sedate, but what if there is a pedestrian standing on the curb, just off the crosswalk? As the pedestrian isn’t in the road, he’s not classified as an obstacle. But is he waiting to cross or just standing there? To know this, the robot would somehow have to interpret the pedestrian’s body language, or be trained to analyze eye contact and facial gestures. Even if the robot driver stopped, the pedestrian might need further signals. “The pedestrian is sometimes wary to walk in front of someone even if they have stopped,” Montemerlo said. “Often they wait for the driver to wave, ‘You go first.’” Would you feel comfortable crossing in front of a driverless Terminator?

In some ways, however, a city environment is actually easier than a dusty desert track. “Urban driving is really constrained; there aren’t many things you can do,” said Montemerlo (who has clearly never driven on New York’s FDR Drive). “This is actually how we’re able to drive. We use the rules of the road and road markings to make assumptions about what might happen.”

Traffic is filled with these assumptions: We drive at full speed through the green light because we’re predicting that the other drivers will have stopped; we do not brace for a head-on collision every time a car comes our way in the opposite lane; we zoom over the crest of a hill because we do not think there is an oil truck stopped just on the other side. “We’re driving faster than we would if we couldn’t make these assumptions,” Montemerlo said. What the Stanford team does is encode these assumptions into the 100,000 or so lines of code that make up Junior’s brain, but not with such rigidity that Junior freezes up when something weird happens.

And weird things happen a lot in traffic. Let’s say a traffic signal is broken. David Letterman once joked that traffic signals in New York City are “just rough guidelines,” but everyone has driven up to a signal that was stuck on red. After some hesitation, you probably, and very carefully, went through the red. Or perhaps you came up behind a stalled car. To get around it would involve crossing a double yellow line, normally an illegal act. But you did it, and traffic laws usually account for exceptional circumstances. What about the question of who proceeds first at a four-way stop? Sometimes there is confusion about who arrived first, which produces a brief four-way standoff. Now picture four robot drivers who arrived at the exact same moment. If they were programmed to let the person who arrived first go first, two things might happen: They might all go first and collide, or they might all sit frozen, the intersection version of a computer crash. So the Stanford team uses complex algorithms to make Junior’s binary logic a bit more human. “Junior tries to estimate what the right time to go is, and tries to wait for its turn,” Montemerlo said. “But if somebody else doesn’t take their turn and enough time passes by, the robot will actually bump itself up the queue.”
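As described, the fix amounts to breaking ties with a timeout: wait your computed turn, but if a car ahead of you in the queue never goes, promote yourself past it. A minimal sketch of that logic (the structure is inferred from Montemerlo’s description; Junior’s actual code is far more involved):

    # Timeout-based precedence at a four-way stop (structure inferred from
    # Montemerlo's description; Junior's actual logic is far more involved).

    import time

    PATIENCE_S = 3.0   # assumed: seconds to wait on a stalled car

    # arrival_order: cars still waiting, earliest arrival first.
    # last_chance_to_go: time.monotonic() timestamp at which each car
    # last had a clear opportunity to enter the intersection.

    def my_turn(arrival_order, me, last_chance_to_go):
        """Go once every car ahead of us has gone or has stalled too long."""
        for car in arrival_order:
            if car == me:
                return True
            waited = time.monotonic() - last_chance_to_go[car]
            if waited < PATIENCE_S:
                return False       # a car ahead still has a live claim
            # otherwise that car has stalled: skip it, i.e., bump ourselves
            # up the queue, as Montemerlo describes
        return False

A real implementation would also need to break the symmetry among robots that promote themselves at the same instant, which is presumably part of what those “complex algorithms” handle.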

The Stanford team found that the best way for Stanley and Junior to learn how to drive was to study how humans drive. But might the robots have anything to teach us? In the very first Grand Challenge, Montemerlo said, Thrun was “always complaining that the robot slowed down too much in turns.” Yet when a graduate student analyzed the race results, he came to the conclusion that the robot could have “cornered like a Ferrari” and still only shaved a few minutes off a seven-hour race—while upping the crash risk. The reason was that most of the course consisted of straight roads. Maintaining the highest average speed over these sections was more important than taking the relatively few turns (the most dangerous parts of the road) at the highest speed possible.
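The graduate student’s conclusion is easy to sanity-check with rough numbers (the course split and cornering speeds below are invented; only the 132-mile length and the 19.1 mph average come from the race):

    # Rough check of the cornering analysis (course split and corner speeds
    # invented; only the 132-mile length and 19.1 mph average are real).

    straight_miles, turn_miles = 129.0, 3.0   # assumed split of the course

    cautious_h = straight_miles / 19.1 + turn_miles / 12.0   # slow in turns
    ferrari_h  = straight_miles / 19.1 + turn_miles / 25.0   # fast in turns

    print(f"{(cautious_h - ferrari_h) * 60:.0f} minutes saved")   # ~8 minutes

Eight-odd minutes against a seven-hour race, bought with the riskiest driving on the course: the straights, not the corners, decide the result.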
