If people were perfectly rational agents, if the brain weren't so bounded, then writing down the last two digits of their Social Security numbers should have no effect on their auction bids. In other words, a student whose Social Security number ended with a low-value figure (such as 10) should be willing to pay roughly the same price as someone with a high-value figure (such as 90). But that's not what happened. For instance, look at the bidding for the cordless keyboard. Students with the highest-ending Social Security numbers (80 to 99) made an average bid of fifty-six dollars. In contrast, students with the lowest-ending numbers (1 to 20) made an average bid of a paltry sixteen dollars. A similar trend held for every single item. On average, students with higher numbers were willing to spend 300 percent more than those with low numbers. All of the business students realized, of course, that the last two digits of their Social Security numbers were completely irrelevant. Such a thing shouldn't influence their bids. And yet, it clearly did.
This is known as the anchoring effect, since a meaningless anchor (in this case, a random number) can have a strong impact on subsequent decisions.
*
While it's easy to mock the irrational bids of the business students, the anchoring effect is actually a common consumer mistake. Consider the price tags in a car dealership. Nobody actually pays the prices listed in bold black ink on the windows. The inflated sticker is merely an anchor that allows the car salesperson to make the real price of the car seem like a better deal. When a person is offered the inevitable discount, the prefrontal cortex is convinced that the car is a bargain.
In essence, the anchoring effect is about the brain's spectacular inability to dismiss irrelevant information. Car shoppers should ignore the manufacturers' suggested retail prices, just as MIT grad students should ignore their Social Security numbers. The problem is that the rational brain isn't good at disregarding facts, even when it knows those facts are useless. And so, if someone is looking at a car, the sticker price serves as a point of comparison, even though it's merely a gimmick. And when a person in the MIT experiment is making a bid on a cordless keyboard, she can't help but tender an offer that takes her Social Security number into account, simply because that number has already been placed into the pertinent decision-making ledger. The random digits are stuck in her prefrontal cortex, occupying valuable cognitive space. As a result, they become a starting point when she thinks about how much she's willing to pay for a computer accessory. "You know you're not supposed to think about these meaningless numbers," Ariely says. "But you just can't help it."
The fragility of the prefrontal cortex means that we all have to be extremely vigilant about not paying attention to unnecessary information. The anchoring effect demonstrates how a single additional fact can systematically distort the reasoning process. Instead of focusing on the important variable (how much is that cordless keyboard really worth?), we get distracted by some meaningless numbers. And then we spend too much money.
This cortical flaw has been exacerbated by modernity. We live in a culture that's awash in information; it's the age of Google, cable news, and free online encyclopedias. We get anxious whenever we are cut off from all this knowledge, as if it's impossible for anyone to make a decision without a search engine. But this abundance comes with some hidden costs. The main problem is that the human brain wasn't designed to deal with such a surfeit of data. As a result, we are constantly exceeding the capacity of our prefrontal cortices, feeding them more facts and figures than they can handle. It's like trying to run a new computer program on an old machine; the antique microchips try to keep up, but eventually they fizzle out.
In the late 1980s, the psychologist Paul Andreassen conducted a simple experiment on MIT business students. (Those poor students at MIT's Sloan School of Management are very popular research subjects. As one scientist joked, "They're like the fruit fly of behavioral economics.") First, Andreassen let each of the students select a portfolio of stock investments. Then he divided the students into two groups. The first group could see only the changes in the prices of their stocks. They had no idea why the share prices rose or fell and had to make their trading decisions based on an extremely limited amount of data. In contrast, the second group was given access to a steady stream of financial information. They could watch CNBC, read the Wall Street Journal, and consult experts for the latest analysis of market trends.
So which group did better? To Andreassen's surprise, the group with less information ended up earning more than twice as much as the well-informed group. Being exposed to extra news was distracting, and the high-information students quickly became focused on the latest rumors and insider gossip. (Herbert Simon said it best: "A wealth of information creates a poverty of attention.") As a result of all the extra input, these students engaged in far more buying and selling than the low-information group. They were convinced that all their knowledge allowed them to anticipate the market. But they were wrong.
The dangers of too much information aren't confined to investors. In another study, college counselors were given a vast amount of information about a group of high school students. The counselors were then asked to predict the grades of these kids during their freshman year in college. The counselors had access to high school transcripts, test scores, the results of personality and vocational tests, and application essays from the students. They were even granted personal interviews so that they could judge the "academic talents" of the students in person. With access to all of this information, the counselors were extremely confident that their judgments were accurate.
The counselors were competing against a rudimentary mathematical formula composed of only two variables: the high school grade point average of the student and his or her score on a single standardized test. Everything else was deliberately ignored. Needless to say, the predictions made by the formula were far more accurate than the predictions made by the counselors. The human experts had looked at so many facts that they lost track of which facts were actually important. They subscribed to illusory correlations ("She wrote a good college essay, so she'll write good essays in college") and were swayed by irrelevant details ("He had such a nice smile"). While the extra information considered by the counselors made them extremely confident, it actually led to worse predictions. Knowledge has diminishing returns, right up until it has negative returns.
This is a counterintuitive idea. When making decisions, people almost always assume that more information is better. Modern corporations are especially beholden to this idea and spend a fortune trying to create "analytic workspaces" that "maximize the informational potential of their decision-makers." These managerial cliches, plucked from the sales brochures of companies such as Oracle and Unisys, are predicated on the assumptions that executives perform better when they have access to more facts and figures and that bad decisions are a result of ignorance.
But it's important to know the limitations of this approach, which are rooted in the limitations of the brain. The prefrontal cortex can handle only so much information at any one time, so when a person gives it too many facts and then asks it to make a decision based on the facts that seem important, that person is asking for trouble. He is going to buy the wrong items at Wal-Mart and pick the wrong stocks. We all need to know about the innate frailties of the prefrontal cortex so that we don't undermine our decisions.
BACK PAIN is a medical epidemic. The numbers are sobering: there's a 70 percent chance that at some point in your life, you'll suffer from it. There's a 30 percent chance that you've suffered from severe back pain in the last thirty days. At any given time, about 1 percent of working-age Americans are completely incapacitated by their lower lumbar regions. Treatment is expensive (more than $26 billion a year) and currently accounts for about 3 percent of total health-care spending. If workers' compensation and disability payments are taken into account, the costs are far higher.
When doctors first started to encounter a surge in patients with back pain (the beginning of the epidemic is generally dated to the late 1960s), they had few answers. The lower back is an exquisitely complicated area of the body, full of tiny bones, ligaments, spinal discs, and minor muscles. And then there's the spinal cord itself, a thick sheath of sensitive nerves that can be easily upset. There are so many moving parts in the back that doctors had difficulty figuring out what exactly was responsible for the pain. Without a definitive explanation, doctors typically sent patients home with a prescription for bed rest.
But this simple treatment plan was extremely effective. Even when nothing was done to the lower back, about 90 percent of patients with back pain managed to get better within seven weeks. The body healed itself, the inflammation subsided, the nerves relaxed. These patients went back to work and pledged to avoid the sort of physical triggers that had caused the pain in the first place.
Over the next few decades, this hands-off approach to back pain remained the standard medical treatment. Although the vast majority of patients didn't receive a specific diagnosis of what caused the pain (the suffering was typically parceled into a vague category such as "lower lumbar strain"), they still managed to experience significant improvements within a short period of time. "It was a classic case of medicine doing best by doing least," says Dr. Eugene Carragee, a professor of orthopedic surgery at Stanford. "People got better without real medical interventions because doctors didn't know how to intervene."
That all changed with the introduction of magnetic resonance imaging (MRI) in the late 1980s. Within a few years, the MRI machine became a crucial medical tool. It allowed doctors to look, for the first time, at stunningly accurate images of the interior of the body. MRI machines use powerful magnets to make protons in the flesh shift ever so slightly. Different tissues react in slightly different ways to this atomic manipulation; a computer then translates the resulting contrasts into high-resolution images. Thanks to the precise pictures produced by the machine, doctors no longer needed to imagine the layers of matter underneath the skin. They could see everything.
The medical profession hoped that the MRI would revolutionize the treatment of lower back pain. Since doctors could finally image the spine and surrounding soft tissue in lucid detail, they figured they'd be able to offer precise diagnoses of what was causing the pain, locating the aggravated nerves and structural problems. This, in turn, would lead to better medical care.
Unfortunately, MRIs haven't solved the problem of back pain. In fact, the new technology has probably made the problem worse. The machine simply sees too much. Doctors are overwhelmed with information and struggle to distinguish the significant from the irrelevant. Take, for example, spinal disc abnormalities. While x-rays can reveal only tumors and problems with the vertebral bones, MRIs can image spinal discs (the supple buffers between the vertebrae) in meticulous detail. After the imaging machines were first introduced, the diagnoses of various disc abnormalities began to skyrocket. The MRI pictures certainly looked bleak: people with pain seemed to have seriously degenerated discs, which everyone assumed caused inflammation of the local nerves. Doctors began administering epidurals to quiet the pain, and if the pain persisted, they would surgically remove the apparently offending disc tissue.
The vivid images, however, were misleading. Those disc abnormalities are seldom the cause of chronic back pain. In a 1994 study published in the New England Journal of Medicine, a group of researchers imaged the spinal regions of ninety-eight people who had no back pain or back-related problems. The pictures were then sent to doctors who didn't know that the patients weren't in pain. The result was shocking: the doctors reported that two-thirds of these normal patients exhibited "serious problems" such as bulging, protruding, or herniated discs. In 38 percent of these patients, the MRI revealed multiple damaged discs. Nearly 90 percent of these patients exhibited some form of "disc degeneration." These structural abnormalities are often used to justify surgery, and yet nobody would advocate surgery for people without pain. The study concluded that, in most cases, "The discovery by MRI of bulges or protrusions in people with low back pain may be coincidental."
In other words, seeing everything made it harder for the doctors to know what they should be looking at. The very advantage of MRIâits ability to detect tiny defects in tissueâturned out to be a liability, since many of the so-called defects were actually normal parts of the aging process. "A lot of what I do is educate people about what their MRIs are showing," says Dr. Sean Mackey, a professor at the Stanford School of Medicine and associate director of the hospital's pain-management division. "Doctors and patients get so fixated on these slight disc problems, and then they stop thinking about other possible causes for the pain. I always remind my patients that the only perfectly healthy spine is the spine of an eighteen-year-old. Forget about your MRI. What it's showing you is probably not important."
The mistaken explanations for back pain triggered by MRIs inevitably led to an outbreak of bad decisions. A large study published in the Journal of the American Medical Association (JAMA) randomly assigned 380 patients with back pain to undergo two different types of diagnostic analysis. One group received x-rays. The other group got diagnosed using MRIs, which gave the doctors much more information about the underlying anatomy.
Which group fared better? Did better pictures lead to better treatments? There was no difference in patient outcome: the vast majority of people in both groups got better. More information didn't lead to less pain. But stark differences emerged when the study looked at how the different groups were treated. Nearly 50 percent of the MRI patients were diagnosed with some sort of disc abnormality, and this diagnosis led to intensive medical interventions. Patients in the MRI group had more doctor visits, more injections, and more physical therapy, and they were more than twice as likely to undergo surgery. These additional treatments were very expensive, and they had no measurable benefit.