But human psychology has a way of rearing its complicated head. One problem is that you can never quite know how people will react. In one study, researchers assembled a panel of drivers who regularly commuted on U.S. 101 in California’s Silicon Valley. Following a multiple-vehicle crash that took over a half hour to clear, causing extensive delays, the researchers interviewed the commuters. They found that only half the drivers had heard about the incident, and that even among those who had, the majority simply headed to work at the normal time, the normal way. Many people simply seemed unconvinced that they could save any time by changing their plans.
We have all had these moments. Do I take the local streets when I see there is a crash ahead? Is it better to leave early on Sunday morning to go back to the city, or will everyone else have that same idea? Do I get in the right lane because it seems empty, or is there a reason no one else is in it? It boils down to how we make decisions when we do not have all the facts. We rely instead on heuristics, those little strategies and mental shortcuts we all have in our head: Well, this road is usually busy for only a few minutes, so I’ll stay on it. Or: I bet that since the radio called for snow there will not be many people at the mall. We use our experience; we make predictions.
This recalls the famous “El Farol problem,” sketched by the economist W. Brian Arthur, after a bar in Albuquerque, New Mexico. The hypothetical scenario imagines that one hundred people would like to go to the bar to listen to live music, but it seems too crowded if more than sixty show up. How does any one person decide whether or not to go? If they go one night and it’s too crowded, do they return the next night, on the thought that people will have been discouraged—or will others have precisely the same thought? Arthur found, in a simulation, that the mean attendance did indeed hover around sixty, but that the attendance numbers for each night continued to oscillate up and down, for the full one hundred weeks of the trial. Which means that one’s chances of going on the right night are essentially random, as people continue to try to adapt their behavior.
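Arthur’s setup is simple enough to sketch as a toy simulation. The Python below is my own illustration rather than Arthur’s original model (the particular forecasting rules, memory length, and scoring scheme are invented), but the mechanism is the same: each would-be patron carries a small bag of predictors, trusts whichever one has erred least so far, and stays home whenever that trusted forecast tops sixty.

```python
# A minimal sketch of the El Farol bar problem. The predictor pool, memory
# length, and scoring rule are invented for illustration; they are not
# Arthur's originals.
import random

N_AGENTS, THRESHOLD, WEEKS, MEMORY, BAG_SIZE = 100, 60, 100, 5, 4

def make_predictor(rng):
    """Return one simple attendance-forecasting rule with randomized parameters."""
    kind = rng.choice(["lag", "weighted_avg", "mirror", "trend"])
    if kind == "lag":
        k = rng.randint(1, MEMORY)
        return lambda h: h[-k]                      # same as k weeks ago
    if kind == "weighted_avg":
        w = [rng.random() for _ in range(MEMORY)]
        total = sum(w)
        return lambda h: sum(wi * hi for wi, hi in zip(w, h)) / total
    if kind == "mirror":
        return lambda h: 100 - h[-1]                # mirror image of last week
    return lambda h: max(0, min(100, 2 * h[-1] - h[-2]))   # continue the trend

def simulate(seed=0):
    rng = random.Random(seed)
    history = [rng.randint(0, 100) for _ in range(MEMORY)]
    agents = [[make_predictor(rng) for _ in range(BAG_SIZE)] for _ in range(N_AGENTS)]
    errors = [[0.0] * BAG_SIZE for _ in range(N_AGENTS)]
    attendance = []
    for _ in range(WEEKS):
        recent = history[-MEMORY:]
        going = 0
        for a in range(N_AGENTS):
            # Each agent trusts whichever of its own predictors has erred least.
            best = min(range(BAG_SIZE), key=lambda i: errors[a][i])
            if agents[a][best](recent) <= THRESHOLD:
                going += 1
        history.append(going)
        attendance.append(going)
        # Afterward, every agent scores each of its predictors against what happened.
        for a in range(N_AGENTS):
            for i in range(BAG_SIZE):
                errors[a][i] += abs(agents[a][i](recent) - going)
    return attendance

result = simulate()
print("mean attendance over", WEEKS, "weeks:", sum(result) / WEEKS)
print("last ten weeks:", result[-10:])
```

Run it and the weekly attendance refuses to settle down; the crowd keeps overshooting and undershooting the comfort threshold as everyone tries to out-forecast everyone else, which is the oscillation Arthur observed.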
This kind of equilibrium problem happens frequently in traffic, even when people have some information. In 2006, for example, the Dan Ryan Expressway in Chicago was undergoing massive repairs. The first day that eight express lanes were closed, traffic moved surprisingly well. The recommended detours moved more slowly than the highway. This was reported on the news. You can guess what happened on Tuesday: More people flocked to the highway. We can surmise that the expressway’s traffic went down on Wednesday, though it may just as likely have gone up.
What happens when we no longer have to guess? We are now just at the beginning stages of a revolution in traffic, as navigation devices, increasingly often equipped with real-time traffic information, enter the market. The navigation part alone has important consequences for traffic. Studies have shown that drivers on unfamiliar roads are roughly 25 percent less efficient than they should be—that is, they are lost—and that their total mileage could be cut by 2 percent if they were always shown the best routes. Logistics software now helps cut delivery times and fuel emissions for UPS and other truck fleets simply by finding ways to avoid, when possible, time-consuming left turns in two-way traffic. But the biggest change will occur when each driver can always know which roads are crowded and what alternate routes would be best—not through guesses but through accurate real-time data.
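The left-turn trick is, at bottom, ordinary shortest-path routing with one extra cost term. The sketch below is a toy illustration, not UPS’s actual software: it runs a shortest-path search over a small grid of two-way streets, keeps track of the direction of travel, and charges an assumed ninety-second penalty whenever a move would mean turning left across oncoming traffic.

```python
# A toy illustration of left-turn-avoiding route planning, not UPS's software:
# Dijkstra over (intersection, heading) states on a small grid of two-way
# streets, with an assumed 90-second penalty for any left turn across traffic.
import heapq

LEFT_TURN_PENALTY = 90      # assumed seconds lost waiting to turn left
SECONDS_PER_BLOCK = 60      # assumed travel time for one block

# Headings as (dx, dy) on a grid where y increases to the north.
HEADINGS = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def is_left_turn(old, new):
    ox, oy = old
    return new == (-oy, ox)   # a 90-degree counterclockwise rotation

def best_route_cost(start, goal, grid_size=5):
    """Cheapest travel time from start to goal, in seconds."""
    frontier = [(0, start, None)]   # (cost so far, intersection, heading)
    seen = set()
    while frontier:
        cost, node, heading = heapq.heappop(frontier)
        if node == goal:
            return cost
        if (node, heading) in seen:
            continue
        seen.add((node, heading))
        x, y = node
        for new_heading in HEADINGS:
            nx, ny = x + new_heading[0], y + new_heading[1]
            if not (0 <= nx < grid_size and 0 <= ny < grid_size):
                continue
            step = SECONDS_PER_BLOCK
            if heading is not None and is_left_turn(heading, new_heading):
                step += LEFT_TURN_PENALTY
            heapq.heappush(frontier, (cost + step, (nx, ny), new_heading))
    return None

# Reaching the corner one block west and one block north: going west first and
# then turning right costs 120 seconds, while going north first and turning
# left would cost 210, so the planner orders the moves to skip the left turn.
print("best cost in seconds:", best_route_cost((2, 2), (1, 3)))
```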
In theory, this flow of real-time information will help reduce inefficiency in the system. Drivers are told there has been a crash ahead, and their in-car device gives them another route that is estimated to save ten minutes. But nothing in traffic is ever so simple.
The first problem is that real-time data is not yet what its name promises. At Seattle’s Inrix, for example, one of the key providers of traffic information, traffic-pattern data is gathered from a variety of sources, current and historical—from loops to probes on commercial vehicles to the schedule of conventions in Las Vegas, some five billion “data points”—and weighted according to its perceived accuracy and age. “So a thirteen-minute-old traffic speed estimate from a Caltrans sensor in the Los Angeles market would get a sub-five-percent weighting in our estimate of current conditions,” explains Oliver Downs, Inrix’s principal research scientist. Inrix estimates the current conditions every minute, but as Downs notes, “it’s a 3.7-minute-old estimate of the current conditions.” Customers, meanwhile, get a new feed every five minutes. “When we say ‘real-time,’” Downs says, this means “less than five minutes.” That might not seem like much, but, as Downs admits, “it’s long relative to how quickly things can change on the roadway.”
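To make that age-weighting concrete, here is a small sketch of the general idea. It is emphatically not Inrix’s model, which is proprietary; the exponential decay rate, the source-quality scores, and the sample readings are all assumptions, chosen only so that a thirteen-minute-old reading ends up counting for less than five percent as much as a fresh one.

```python
# An illustrative sketch only: Inrix's real weighting model is proprietary, so
# the decay rate, source-quality scores, and sample readings below are assumed,
# chosen so that a thirteen-minute-old reading counts for under five percent
# as much as a fresh one.
import math

DECAY_PER_MINUTE = 0.25   # assumed: exp(-0.25 * 13) is about 0.039, under 5%

def age_weight(age_minutes, source_quality=1.0):
    """Discount an observation by its age and the perceived quality of its source."""
    return source_quality * math.exp(-DECAY_PER_MINUTE * age_minutes)

def estimate_speed(observations):
    """Blend (speed_mph, age_minutes, quality) readings into one current estimate."""
    weights = [age_weight(age, quality) for _, age, quality in observations]
    speeds = [speed for speed, _, _ in observations]
    return sum(w * s for w, s in zip(weights, speeds)) / sum(weights)

readings = [
    (22.0, 1.0, 0.9),   # one-minute-old GPS probe from a delivery truck
    (45.0, 13.0, 1.0),  # thirteen-minute-old loop-detector reading
]
print("blended speed estimate:", round(estimate_speed(readings), 1), "mph")
print("discount on the stale reading:", round(age_weight(13.0), 3))
```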
The other problem comes in how people will use that information, or what you should tell them to do based on the information. Michael Schreckenberg, the German physicist known as the “jam professor,” has worked with officials in North Rhine–Westphalia in Germany to provide real-time information, as well as “predictive” traffic forecasts. Like Inrix, if less extensively, they have assembled some 360,000 “fundamental diagrams,” or precise statistical models of the flow behavior of highway sections. They have a good idea of what happens on not only a “normal” day but on all the strange variations: weeks when a holiday falls on Wednesday, the first day there is ice on the road (most people, he notes, will not have yet put on winter tires), the first day of daylight saving time, when a normally light morning trip may occur in the dark.
This kind of information, along with the data gathered from various loops and sensors, can be used to make precise forecasts about what traffic will be like not only on “normal” days but when crashes or incidents occur. There is, however, a problem: Does the forecast itself change the way people will behave, thus changing the forecast? As the economist Tim Harford notes about Wall Street forecasting, if everyone knew today that a stock was going to rise tomorrow, everyone would buy the stock today—thus making it so expensive it could no longer rise tomorrow.
Schreckenberg calls this the “self-destroying prognosis.” In his office at the University of Duisburg-Essen, he points to a highway map with its roads variously lit up in free-flowing green or clogged red. “The prognosis says that this road becomes worse in one hour,” he says. “Many people look at that and say, ‘Oh, don’t use the A3.’ Then they go somewhere else. The jam will not occur since everyone turned to another way. This is a problem.” These sorts of oscillations could happen with even short lags in information, in what Schreckenberg calls the “ping-pong effect.” Imagine there are two routes. Drivers are told that one is five minutes faster. Everyone shifts to that route. By the time the information is updated, the route that everyone got on is now five minutes slower. The other road now becomes faster, but it quickly succumbs to the same problem.
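The mechanism is easy to reproduce in a toy simulation. Everything in the sketch below is invented for illustration (two identical routes, a simple linear congestion delay, a broadcast that lags one update cycle behind reality, and drivers who all obey it); it is not Schreckenberg’s model, but it bounces back and forth in the same way.

```python
# A toy version of the "ping-pong effect": two identical routes whose travel
# times grow with the number of cars on them, and drivers who all obey a
# traffic broadcast that is one update cycle out of date. All numbers are
# invented for illustration.
FREE_FLOW = 20.0          # minutes on an empty route
MINUTES_PER_CAR = 0.01    # each additional car adds a little delay

def travel_time(cars):
    return FREE_FLOW + MINUTES_PER_CAR * cars

reported = (600, 400)     # the broadcast starts out showing route A as slower
for cycle in range(6):
    # Everyone trusts the stale broadcast and piles onto whichever route it
    # calls faster; the resulting jam only shows up in the next broadcast.
    if travel_time(reported[0]) < travel_time(reported[1]):
        actual = (900, 100)
    else:
        actual = (100, 900)
    print(f"cycle {cycle}: route A {travel_time(actual[0]):.0f} min, "
          f"route B {travel_time(actual[1]):.0f} min")
    reported = actual     # today's jam becomes tomorrow's traffic report
```

Run it and the two routes simply trade places every cycle: whichever one the broadcast praises has become the slow one by the time everyone arrives on it.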
This raises a question: Has the information provided actually helped drivers or the system as a whole—or has it triggered the “selfish routing” mentioned before? Moshe Ben-Akiva, the director of MIT’s Intelligent Transportation Systems program, has studied such travel behavior issues for decades. He calls traffic predictions a “chicken-and-egg problem.” “The correct prediction must take into account how people are going to respond to the prediction,” he says. “You cannot predict what will happen tomorrow without taking into account how people are going to respond to the prediction once the prediction is broadcast.”
And so researchers create models that anticipate how people will behave, based on how they have behaved in the past. Schreckenberg, in Germany, wonders if this means, in essence, not giving drivers the whole picture. “You have to structure the information. What you want is for the people to do certain things. Telling them the whole truth is not the best way.” This is something on the minds of the big commercial providers of traffic information. As Howard Hayes, vice president of NAVTEQ, said at the firm’s headquarters in Chicago, “What happens if once this really good predictive traffic information becomes available, everyone starts getting shunted over to a different direction, which itself becomes jammed? Ideally you need something sophisticated, so that a certain number of people get shunted to one route and others to another.”
Since the information is still so limited, and since so few people actually have access to it, we do not really know how it will all play out once everyone is able to know the traffic conditions on every road in a network. Most simulations have shown that more drivers having more real-time information—the closer to actual real-time, the better—can reduce travel times and congestion. Even drivers without real-time information can benefit, it is argued, because better-informed drivers will exit crowded roads, thus making those roads less crowded for uninformed drivers stuck in traffic. But as you might expect, studies suggest that the benefit for any one driver with access to real-time information drops as more people have it. This is, in essence, the death of the shortcut. The more people know the best routes at all times, the less chance of there being some gloriously underutilized road. This is good for all drivers (i.e., the “system”) but less good, say, for the savvy taxi driver.
Real-time traffic and routing is most valuable, it has been suggested, during nonrecurring congestion. When a road that is normally not crowded is backed up because of a crash, it’s useful to know of better options. During recurring congestion, however, those peak-hour jams that result from too many people going to the same place at once, the advantage shrivels once the tipping point of congestion has been passed. (It is most effective right on the brink, when alternative routes are on the verge of drying up.) In a traffic system that is always congested, any good alternative routes will have already been discovered by other drivers.
Another shortcoming of real-time routing is due to a curious fact about urban road networks. As a group of researchers observed after studying traffic patterns and road networks in the twenty largest cities in Germany, roads follow what’s called a “power law”—in other words, a small minority of roads carry a huge majority of the traffic. In Dresden, for example, while 50 percent of the total road length carried hardly any traffic at all (0.2 percent), 80 percent of the total traffic ran on less than 10 percent of the roads. The reason is rather obvious: Most drivers tend to drive on the largest roads, because they are the fastest. Even though they may have slowed due to congestion, they are still fastest. Traffic engineers, having built the roads, are generally aware of this fact, and would rather have you stay on the road that was designed for heavy use, instead of engaging in widespread “rat runs” that play havoc with local roads.
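What a “power law” means in practice is that a handful of busy roads soak up most of the traffic, something easy to see even with made-up numbers. The sketch below draws synthetic traffic volumes from a heavy-tailed distribution; the parameters are invented for illustration and are not fitted to the Dresden data, so the printed percentages only loosely echo the study’s figures.

```python
# A synthetic illustration of traffic concentrating on a few roads. The Pareto
# shape parameter and road count below are invented, not fitted to the Dresden
# data, so the output only loosely echoes the figures from the German study.
import random

random.seed(1)
N_ROADS = 10_000
# Heavy-tailed daily volumes: most roads see little traffic, a few see a lot.
volumes = sorted((random.paretovariate(1.1) for _ in range(N_ROADS)), reverse=True)
total = sum(volumes)

busiest_tenth = volumes[: N_ROADS // 10]
quietest_half = volumes[N_ROADS // 2 :]
print("traffic share of the busiest 10% of roads:",
      round(sum(busiest_tenth) / total, 2))
print("traffic share of the quietest 50% of roads:",
      round(sum(quietest_half) / total, 3))
```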
Both the promise and the limits of real-time traffic and routing information were demonstrated to me one day as I drove on Interstate 95 in Connecticut, using real-time traffic information provided by TeleNav via a Motorola mobile phone. The phone had been cheerily giving directions, even offering an evolving estimated time of arrival. Suddenly, an alert sounded: Congestion ahead. I queried the system for the best alternate route. It quickly drew one up, then delivered the bad news: It would take longer than the route I was on. The road I was on, congested or not, was still the best.
Real-time traffic and routing information and congestion pricing are two sides of the same coin. One tells drivers how to avoid traffic congestion; the other impels drivers to avoid traffic congestion. When the roads are congested, real-time information does little good, except to tell drivers, like the people in line for Disney World’s Space Mountain, how long they can expect to wait. This alone may be enough of a social good. But real-time congestion information, provided by the very cars generating that congestion, promises something else. It can be used to calculate the exact demand for any stretch of road at any time. With congestion pricing, the traffic on the roads will finally be made to act like the traffic in things, with market prices reflecting and shaping the supply and demand.
When Dangerous Roads Are Safer
An overturned cart is a warning to oncoming drivers.
—Chinese proverb
Just before dawn on Sunday, September 3, 1967, there was an unusually festive air in the streets of Stockholm. Cars honked, passersby cheered, people gave flowers to police officers, pretty girls smiled from the curb. The streets were clogged with cars, many of which had been waiting for hours to participate in a historic traffic jam. People stole bicycles simply to be a part of traffic. At the moment the bells chimed for six o’clock, Swedes began driving on the right.
It had taken years of debate, and much preparation, to get to this point. Motions to switch from left-side driving had been raised in Parliament several times in previous decades, only to be shot down. The issue was put before Swedes in a 1955 referendum, but the measure was overwhelmingly defeated. Undeterred, backers of right-side driving finally got a measure approved by the government in 1963.
Proponents said that driving on the right, as was the practice in the rest of Scandinavia and the bulk of Europe, would lower the number of accidents in which foreigners were increasingly becoming involved. Most cars in use already had steering wheels on the left side. Those opposed, which was most of Sweden, grumbled about the huge costs of the changeover, and said that accident rates were bound to rise.
As “H-Day” (after höger, the Swedish word for “right”) approached, the predictions of ensuing chaos and destruction grew dire. “What is going to happen here in September has cast many grotesque shadows all over Sweden,” the New York Times observed darkly. This despite four years of preparation and an especially energetic blitz of public-service announcements in the final year before the changeover. There was even a pop song, titled “Håll dej till Höger, Svensson!” or “Let’s All Drive on the Right, Svensson!” (after a stereotypically common Swedish surname).
And what happened when Swedes started driving on the other side of the road, many for the first time in their lives? The roads got safer. On the Monday after the change, the traffic commissioner reported a below-average number of accidents. True, this may have been anticipated, despite the gloomy predictions. For one, many Swedes, scared witless by the spectacle, undoubtedly chose not to drive, or drove less. For another, a special speed limit, which had already been in place for some months before the changeover, was enforced: 40 kilometers per hour in towns, 60 on open roads, 90 on highways. Lastly, the whole operation was run with Scandinavian efficiency and respect for the law. This was the country that gave the world Volvo, by God—how could it not be safe?
Remarkably, it was not just for a few days, or even weeks, after the changeover that Sweden’s roads were safer. It took a year before the accident rate returned to what it had been the year before the changeover. This raises the question of whether the changeover actually achieved anything in the long run for safety, but in the short term, when one might have predicted an increase in accidents as an entire nation went through the learning curve of right-hand driving, Sweden actually became safer. Faced with roads that had overnight theoretically become more dangerous, Swedes were behaving differently. Studies of drivers showed they were less likely to overtake another when a car was approaching in the oncoming lane, while pedestrians were looking for longer gaps in the traffic before choosing to cross.
Had Sweden’s roads actually become more dangerous? They were the same roads, after all, even if drivers were driving on a new side. What had changed was that the roads felt less safe to Swedish drivers, and they seemed to react with more caution.
Most people have probably had similar moments. Think about a roundabout, quite common in Europe but still rare on these shores. For many Americans they are frightening places, their intimidation factor perhaps best captured by the plight of the hapless Griswold clan in National Lampoon’s European Vacation, who, having entered a London traffic circle, find that they cannot leave. They orbit endlessly, locked in a traffic purgatory, until night closes in, the family has fallen asleep, and the father is babbling uncontrollably. Whether this rings true or not, it must be pointed out that the much-maligned traffic circle is not the same thing as a roundabout. A traffic circle differs in a number of ways, most notably in that cars already in the circle must often yield to cars entering the circle. Traffic circles are also larger, and cars enter at a much higher speed, which makes for less efficient merging. They may also rely on traffic signals. In roundabouts, which are free of signals, cars entering must yield to those already in the circle. We have already seen that roundabouts can be more efficient, but it may surprise you to learn that modern roundabouts are also much safer than a conventional intersection with traffic lights.
The first reason has to do with their design. Intersections are crash magnets—in the United States, 50 percent of all road crashes occur at intersections. At a four-way intersection, there are a staggering fifty-six potential points of what engineers call “conflict,” or the chance for you to run into someone—thirty-two of these are places where vehicles can hit vehicles, and twenty-four are spots where vehicles can hit pedestrians.
Roundabouts sharply drop the total number of potential conflicts to sixteen, and, thanks to their central islands (which create what engineers call “deflection”), they eliminate entirely the two most dangerous moves in an intersection: crossing directly through the intersection, often at high speed (the average speed in most roundabouts is half that of conventional intersections, which increases safety for surrounding pedestrians), and making a left turn. This little action involves finding a suitable gap in oncoming traffic—often as one’s view is blocked by an oncoming car waiting to make its own left turn—and then, as your attention may still be divided, making sure you do not hit a pedestrian in the crosswalk you are entering as you whisk through your turn. One study that looked at twenty-four intersections that had been converted from signals and stop signs to roundabouts found that total crashes dropped nearly 40 percent, while injury crashes dropped 76 percent and fatal crashes by about 90 percent.
There is a paradox here: The system that many of us would feel is more dangerous is actually safer, while the system we think is safer is actually more dangerous. This points to a second, more subtle factor in why roundabouts are safer. Intersections of any kind are complex environments for the driver, requiring high amounts of mental workload to process things like signs, other cars, and turning movements. Drivers approaching an intersection with a green light may feel there is little left for them to do; they have the green light. But traffic lights have pernicious effects in and of themselves, as Kenneth Todd, a retired engineer in Washington, D.C., has pointed out. The desire to “catch” a green makes drivers speed up at precisely the moment they should be looking for vehicles making oncoming turns or entering the main road from a right turn on red. The high placement of traffic lights also puts drivers’ eyes upward, away from the street and things like the brake lights of the slowing cars they are about to hit. Then there are the color-blind drivers who cannot make out the red versus green, and the moments when sunlight washes out the light for everyone.
With a roundabout, only a fool would blindly sail into the scrum at full speed. Drivers must adjust their speed, scan for openings, negotiate the merge. This requires more workload, which increases stress, which heightens the feeling of danger. This is not in itself a bad thing, because intersections are, after all, dangerous places. The system that makes us more aware of this is actually the safer one.
Once, on a driving trip in rural Spain, I decided to take a shortcut. On the map, it looked like a good idea. The road turned out to be a climbing, twisting, broken-asphalt nightmare of blind hairpin turns. There were few guardrails, just vertigo-inducing drops into distant gulleys. The few signs there were told me what I already knew: PELIGRO. Danger. And how did I drive? Incredibly slowly, with both hands locked on the wheel, eyes boring straight ahead. I honked ahead of every blind curve. My wife, who fears both heights and head-on collisions, never trusted me with a Spanish map again.
Was the road dangerous or safe? On the one hand, it was incredibly dangerous. The “sight distances,” as road engineers call the span required for one to see a problem and safely react to it (based on a certain travel speed), were terrible. The lanes were narrow and not always marked. There was only the occasional warning sign. Had there been a collision, there was little to keep me from tumbling off the edge of the road. And so I drove as if my life depended on it. Now picture another road in Spain, the nice four-lane highway we took from the airport down to Extremadura. There was little traffic, no police, and I was eager to get to our hotel. I drove at a healthy pace, because it felt safe: a smooth, flat road with gentle curves and plenty of visibility. The sun was shining; signs alerted me to every possible danger. And what happened? Grown briefly tired from the monotony of the highway (drivers have a greater chance of becoming drowsy on roads with less traffic and on divided highways free of junctions) and the glare of the sun, I just about fell asleep and ran off the road. Was this road dangerous or safe?
Of the two roads, the highway was of course the more objectively safe. It is well known that limited-access highways are among the safest roads we travel. There is little chance of a head-on collision, cars move at relatively the same speeds, medians divide opposing traffic streams, curves are tamed and banked with superelevation to correct drivers’ mistakes, there are no bikes or pedestrians to scan for, and even if I had started to nod off I would have been snapped back to attention with a “sonic nap alert pattern,” or what you might call a rumble strip. At the worst extreme, a guardrail may have kept me from running off the road or across the median, and if it was one of the high-tension cable guardrails, like the Brifen wire-rope safety fence, increasingly showing up from England to Oklahoma, it might have even kept me from bouncing back into traffic.
Those rumble strips are an element of what has been called the “forgiving road.” The idea is that roads should be designed with the thought that people will make a mistake. “When that happens it shouldn’t carry a death sentence,” as John Dawson, the head of the European Road Assessment Programme, explained it to me. “You wouldn’t allow it in a factory, you wouldn’t allow it in the air, you wouldn’t allow it with products. We do allow it on the roads.”
This struck me as a good and fair idea, and yet something nagged at the back of my brain: I couldn’t help but think that of the two roads, it was the safer one on which I had almost met my end. Lulled by safety, I’d acted more dangerously. This may seem like a simple, even intuitive idea, but it is actually an incredibly controversial one—in fact, heretical to some. For years, economists, psychologists, road-safety experts, and others have presented variations on this theory, under banners ranging from “the Peltzman effect” and “risk homeostasis,” to “risk compensation” and the “offset hypothesis.” What they are all saying, to crudely lump all of them together, is that we change our behavior in response to perceived risk (an idea I will explore more fully in Chapter 9), without even being aware that we are doing so.
As my experience with the two roads in Spain suggested, the question is a lot more subtle and complicated than merely “Is this a dangerous or safe road?” Roads are also what we make of them. This fact is on the minds of engineers with the Federal Highway Administration’s Turner-Fairbank Highway Research Center, located in Langley, Virginia, just next to the Central Intelligence Agency.
The first thing to think about is, What is a road telling you, and how? The mountain road in Spain did not need speed-limit signs, because it was plainly evident that going fast was not a good idea. This is an extreme version of what has been called a “self-explaining road,” one that announces its own level of risk to drivers, without the need for excessive advice. But, you protest, would it not have been better for that mountain road to have signs warning of the curves or reflector posts guiding the way? Perhaps, but consider the results of a study in Finland that found that adding reflector posts to a curved road resulted in higher speeds and more accidents than when there were no posts. Other studies have found that drivers tend to go faster when a curve is marked with an advisory speed limit than when it is not.
The truth is that the road itself tells us far more than signs do. “If you build a road that’s wide, has a lot of sight distance, has a large median, large shoulders, and the driver feels safe, they’re going to go fast,” says Tom Granda, a psychologist employed by the Federal Highway Administration (FHWA). “It doesn’t matter what speed limit or sign you have. In fact, the engineers who built that road seduced the driver to go that fast.”