The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us
Authors: Christopher Chabris, Daniel Simons
You won’t be surprised to learn that we favor a mundane explanation for all of these face sightings. Your visual system has a difficult problem to solve in recognizing faces, objects, and words. They can all appear in a wide variety of conditions: good light, bad light, near, far, oriented at different angles, with some parts hidden, in different colors, and so on. Like an amplifier that you turn up in order to hear a weak signal, your visual system is exquisitely sensitive to the patterns that are most important to you. In fact, visual areas of your brain can be activated by images that only vaguely resemble what they’re tuned for. In just one-fifth of a second, your brain can distinguish a face from other objects like chairs or cars. In just an instant more, your brain can distinguish objects that look a bit like faces, such as a parking meter or a three-prong outlet, from other objects like chairs. Seeing objects that resemble faces induces activity in a brain area called the fusiform gyrus that is highly sensitive to real faces. In other words, almost immediately after you see an object that looks anything like a face, your brain treats it like a face and processes it differently than other objects. That’s one reason why we find it so easy to see facelike patterns as actual faces.[5]
The same principles apply to our other senses. Play Led Zeppelin’s “Stairway to Heaven” backward and you may hear “Satan,” “666,” and some other strange words. Play Queen’s “Another One Bites the Dust” backward and the late Freddie Mercury might tell you “it’s fun to smoke marijuana.” This phenomenon can be exploited for fun and profit. A writer named Karen Stollznow noticed a faint outline on a Pop-Tart that could be interpreted as the miter-style hat traditionally worn by the pope. She snapped a digital photo, uploaded it to eBay, and opened up bidding on the “Pope Tart.” Over the course of the auction she exchanged numerous entertaining e-mails with believers and skeptics. By the end, the winning bid was $46. She attributed the relatively low price paid for the Pope Tart to a lack of publicity, as compared with the press releases and television coverage received by the Virgin Mary Grilled Cheese.[6]
These examples represent just the tip of the iceberg that is the mind’s hyperactive tendency to spot patterns. Even trained professionals are biased to see patterns they expect to see and not ones that seem inconsistent with their beliefs. Recall Brian Hunter, the hedge fund manager who lost it all (more than once) by betting on the future price of natural gas. He thought he understood the reasons for the movements of the energy markets, and his inference of a causal pattern in the markets led to his company’s downfall. When pattern recognition works well, we can find the face of our lost child in the middle of a huge crowd at the mall. When it works too well, we spot deities in pastries, trends in stock prices, and other relationships that aren’t really there or don’t mean what we think they do.
Unlike the parade of unusual patients appearing on television dramas like Grey’s Anatomy and House, or coming to Dr. Keating’s St. Louis diagnostic clinic, the vast majority of the patients whom doctors see on a daily basis have run-of-the-mill problems. Experts quickly recognize common sets of symptoms; they’re sensitized to the most probable diagnoses, learning quite reasonably to expect to encounter the common cold more often than an exotic Asian flu, and ordinary sadness more often than clinical depression.
Intuitively, most people think that experts consider more alternatives and more possible diagnoses rather than fewer. Yet the mark of true expertise is not the ability to consider more options, but the ability to filter out irrelevant ones. Imagine that a child arrives in the emergency room wheezing and short of breath. The most likely explanation might be asthma, in which case treating with a bronchodilator like albuterol should fix the problem. Of course, it’s also possible that the wheezing is caused by something the child swallowed that became lodged in his throat. Such a foreign body could cause all sorts of other symptoms, including secondary infections. On shows like House, that rare explanation would of course turn out to be the cause of the child’s symptoms. In reality, though, asthma or pneumonia is a far more likely explanation. An expert doctor recognizes the pattern, and likely has seen many patients with asthma, leading to a quick and almost always accurate diagnosis. Unless your job is like Dr. Keating’s, and you know that you’re dealing with exceptional cases, focusing too much on the rare causes would be counterproductive. Expert doctors consider first those few diagnoses that are the most probable explanations for a pattern of symptoms.
Experts are, in a sense, primed to see patterns that fit their well-established expectations, but perceiving the world through a lens of expectations, however reasonable, can backfire. Just as people counting basketball passes often fail to notice an unexpected gorilla, experts can miss a “gorilla” if it is an unusual, unexpected, or rare underlying cause of a pattern. This can be an issue when doctors move from practicing in hospitals during their residencies and fellowships to practicing privately, especially if they go into family practice or internal medicine in a more suburban area. The frequencies of diseases doctors encounter in urban teaching hospitals differ greatly from those in suburban medical offices, so doctors must retune their pattern recognizers to the new environment in order to maintain an expert level of diagnostic skill.
Expectations can cause anyone to sometimes see things that don’t exist. Chris’s mother has suffered from arthritis pain in her hands and knees for several years, and she feels that her joints hurt more on days when it is cold and raining. She’s not alone. A 1972 study found that 80–90 percent of arthritis patients reported greater pain when the temperature went down, the barometric pressure went down, and the humidity went up—in other words, when a cold rain was on the way. Medical textbooks used to devote entire chapters to the relationship between weather and arthritis. Some experts have even advised chronic pain patients to move across the country to warmer, drier areas. But does the weather actually exacerbate arthritis pain?
Researchers Donald Redelmeier, a medical doctor, and Amos Tversky, a cognitive psychologist, tracked eighteen arthritis patients over fifteen months, asking them to rate their pain level twice each month. Then they matched these data up with local weather reports from the same time period. All but one of the patients believed that weather changes had affected their pain levels. But when Redelmeier and Tversky mapped the reports of pain to the weather the same day, or the day before, or two days before, there was no association at all. Despite the strong beliefs of the subjects who participated in their experiment, changes in the weather were entirely unrelated to reports of pain.
Chris told his mother about this study. She said she was sure it was right, but she still felt what she felt. It’s not surprising that pain doesn’t necessarily respond to statistics. So why do arthritis sufferers believe in a pattern that doesn’t exist? What would lead people to think there was an association even when the weather was completely unpredictive? Redelmeier and Tversky conducted a second experiment. They recruited undergraduates for a study and showed them pairs of numbers, one giving a patient’s pain level and the other giving the barometric pressure for that day. Keep in mind that in actuality, pain and weather conditions are unrelated—knowing the barometric pressure is of no use in predicting how much pain a patient experienced that day, because pain is just as likely when it’s warm and sunny as when it’s cold and rainy. The fabricated experimental data likewise contained no relationship. Yet just like the actual patients, more than half of the undergraduates thought there was a link between pain and the weather in the data set. In one case, 87 percent saw a positive relationship.
Through a process of “selective matching,” the subjects in this experiment focused on patterns that existed only in subsets of the data, such as a few days when low pressure and pain happened to coincide, and neglected the rest. Arthritis sufferers likely do the same: They remember those days when arthritis pain coincided with cold, rainy weather better than those days when they had pain but it was warm and sunny, and much better than pain-free days, which don’t stand out in memory at all. Putative links between the weather and symptoms are part of our everyday language; we speak of “feeling under the weather” and we think that wearing hats in winter lessens our chances of “catching a cold.” The subjects and the patients perceived an association where none existed because they interpreted the weather and pain data in a way that was consistent with their preexisting beliefs. In essence, they saw the gorilla they expected to see even when it was nowhere in sight.[7]
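To see how selective matching can manufacture a pattern out of pure noise, here is a minimal sketch in Python (our illustration, not Redelmeier and Tversky’s actual analysis; all numbers and thresholds are arbitrary assumptions). It generates pain ratings and barometric pressures that are statistically independent, then contrasts the true correlation, which is near zero, with the count of vivid, belief-confirming coincidences.

```python
import random
from statistics import correlation  # Python 3.10+

random.seed(42)

# 365 days of statistically independent data: neither series predicts the other.
days = 365
pressure = [random.gauss(1013, 8) for _ in range(days)]  # barometric pressure (hPa)
pain = [random.gauss(5, 2) for _ in range(days)]         # pain rating (0-10 scale)

print(f"Actual correlation: {correlation(pressure, pain):+.3f}")  # hovers near zero

# Selective matching: tally only the memorable, belief-confirming days
# (low pressure AND high pain) and neglect every other kind of day.
confirming = sum(1 for p, a in zip(pressure, pain) if p < 1013 and a > 5)
print(f"Belief-confirming days: {confirming} of {days}")
# By chance alone, roughly one day in four lands in the confirming cell,
# supplying plenty of vivid coincidences even though the full data set
# contains no pattern at all.
```

Counting only that one cell of the two-by-two table of possibilities (pressure low or not, pain high or not) is exactly the selective matching described above; the three neglected cells are what would reveal the absence of any real association.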
Many introductory psychology textbooks ask students to think about possible reasons why ice cream consumption should be positively associated with drowning rates. More people drown on days when a lot of ice cream is consumed, and fewer people drown on days when only a little ice cream is consumed. Eating ice cream presumably doesn’t cause drowning, and news of drownings shouldn’t inspire people to eat ice cream. Rather, a third factor—the summer heat—likely causes both. Less ice cream is consumed in winter, and fewer people drown then because fewer people go swimming.[8]
This example draws attention to the second major bias underlying the illusion of cause—when two events tend to happen together, we infer that one must have caused the other. Textbooks use the ice cream–drowning correlation precisely because it’s hard to see how either one could cause the other, but easy to see how a third, unmentioned factor could cause both. Unfortunately, seeing through the illusion of cause is rarely so simple in the real world.
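The third-factor logic is easy to demonstrate with a few lines of code. In the sketch below (again our illustration; the effect sizes are arbitrary assumptions, not data from any study), a seasonal temperature cycle drives both ice cream sales and drownings, the two never influence each other, and yet they come out strongly correlated. Holding temperature roughly constant makes the spurious association collapse.

```python
import math
import random
from statistics import correlation  # Python 3.10+

random.seed(7)
days = 365

# The hidden third factor: temperature follows a seasonal cycle.
temp = [15 + 12 * math.sin(2 * math.pi * d / days) + random.gauss(0, 3)
        for d in range(days)]

# Heat drives both behaviors; neither one affects the other directly.
ice_cream = [max(0.0, 10 * t + random.gauss(0, 30)) for t in temp]    # servings sold
drownings = [max(0.0, 0.1 * t + random.gauss(0, 0.5)) for t in temp]  # incidents

print(f"ice cream vs. drownings: r = {correlation(ice_cream, drownings):+.2f}")

# Hold the confounder roughly constant: among days of similar temperature,
# the ice cream-drowning association largely evaporates.
mild = [(i, dr) for i, dr, t in zip(ice_cream, drownings, temp) if 13 < t < 17]
xs, ys = zip(*mild)
print(f"within mild days only ({len(mild)} days): r = {correlation(xs, ys):+.2f}")
```

Statisticians call the move in the last two lines stratifying on the confounder; it is a crude observational stand-in for what a true experiment accomplishes by design.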
Most conspiracy theories are based on detecting patterns in events that, when viewed with the theory in mind, seem to help us understand why they happened. In essence, conspiracy theories infer cause from coincidence. The more you believe the theory, the more likely you are to fall prey to the illusion of cause.
Conspiracy theories result from a pattern perception mechanism gone awry—they are cognitive versions of the Virgin Mary Grilled Cheese. Those conspiracy theorists who already believed that President Bush would stage 9/11 to justify a preconceived plan to invade Iraq were quick to see his false memory of seeing the first plane hit the towers as evidence that he knew about the attack in advance. People who already thought that Hillary Clinton would say anything to get elected were quick to jump on her false memory of Bosnian snipers as evidence that she was lying to benefit her campaign. In both cases, people used their understanding of the person to fit the event into a pattern. They inferred an underlying cause, and they were so confident that they had the right cause that they failed to notice more plausible alternative explanations.
Illustrations of this illusion of cause are so pervasive that undergraduates in our research methods classes have no problem completing our assignment to find a recent media report that mistakenly infers a causal relationship from a mere association. One BBC article, provocatively titled “Sex Keeps You Young,” reported a study by Dr. David Weeks of the Royal Edinburgh Hospital showing that “couples who have sex at least three times a week look more than 10 years younger than the average adult who makes love twice a week.”[9] The caption to an attached photo read, “Regular sex ‘can take years off your looks.’” Although having sex could somehow cause a youthful appearance, it is at least as plausible that having a youthful appearance leads to more sexual encounters, or that a youthful appearance is a sign of physical fitness, which makes frequent sex easier, or that people who appear more youthful are more likely to maintain an ongoing sexual relationship, or … the possible explanations are endless. The statistical association between youthful appearance and sexual activity does not imply that one causes the other. Had the title been phrased in the opposite way, “Looking Young Gets You More Sex,” it would have been equally conclusory, but less surprising and therefore less newsworthy.
Of course, some correlations are more likely to reflect an actual causal relationship than others. Higher summer temperatures are more likely to cause people to eat ice cream than are reports of drownings. Statisticians and social scientists have developed clever ways to gather and analyze correlational data that increase the odds of finding a true causal effect. But the only way—let us repeat, the only way—to definitively test whether an association is causal is to run an experiment. Without an experiment, observing an association may just be the scientific equivalent of noticing a coincidence. Many medical studies adopt an epidemiological approach, measuring rates of illness and comparing them among groups of people or among societies. For example, an epidemiological study might measure and compare the overall health of people who eat lots of vegetables with that of people who eat few vegetables. Such a study could show that people who eat vegetables throughout their lives tend to be healthier than those who don’t. This study would provide scientific evidence for an association between vegetable-eating and health, but it would not support a claim that eating vegetables causes health (or that being healthy causes people to eat vegetables, for that matter). Both vegetable-eating and health could be caused by a third factor—for instance, wealth may enable people to afford both tasty, fresh produce and superior health care. Epidemiological studies are not experiments, but in many cases—such as smoking and lung cancer in humans—they are the best way to determine whether two factors are associated, and therefore have at least a potential causal connection.
Unlike an observed association, though, an experiment systematically varies one factor, known as the independent variable, to see its effect on another factor, the dependent variable. For example, if you were interested in learning whether people are better able to focus on a difficult task when listening to background music than when sitting in silence, you would randomly assign some people to listen to music and others to work in silence and you would measure how well they do on some cognitive test. You have introduced a cause (listening to music or not listening to music) and then observed an effect (differences in performance on the cognitive test). Just measuring two effects and showing that they co-occur does not imply that one causes the other. That is, if you just measure whether people listen to music and then measure how they do on cognitive tasks, you cannot demonstrate a causal link between music listening and cognitive performance. Why not?
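To make the contrast concrete, here is a sketch of that hypothetical music experiment in Python, with a made-up effect size and group sizes of our own choosing. Because chance alone decides who hears music, any systematic difference between the groups can be credited to the music itself.

```python
import random
from statistics import mean

random.seed(1)

def focus_score(music: bool) -> float:
    """Simulated test score; the +3 point 'music benefit' is an arbitrary assumption."""
    return random.gauss(70, 10) + (3 if music else 0)

# Random assignment: a coin flip, not self-selection, sets each condition.
participants = 200
gets_music = [random.random() < 0.5 for _ in range(participants)]
scores = [focus_score(m) for m in gets_music]

music_scores = [s for s, m in zip(scores, gets_music) if m]
silence_scores = [s for s, m in zip(scores, gets_music) if not m]

print(f"music:   mean = {mean(music_scores):.1f} (n = {len(music_scores)})")
print(f"silence: mean = {mean(silence_scores):.1f} (n = {len(silence_scores)})")

# The independent variable (music vs. silence) is manipulated; the dependent
# variable (test score) is measured. In a purely observational design, people
# who choose to work with music might differ from everyone else in countless
# other ways, and any of those differences, not the music, could drive the scores.
```

The closing comment hints at the answer: without random assignment, music listeners are a self-selected group, and self-selection reintroduces all the third-factor problems described above.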