The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us
Authors: Christopher Chabris, Daniel Simons
Homer assumes that the bear patrol kept away bears, but it really did nothing at all—the first bear sighting was an anomaly that would not have recurred in any case. The scene is funny because the causal relationship is so outlandish. Rocks don’t keep tigers away, but Homer draws the inference anyway because the chronology of events induced an illusion of cause. In other cases, when the causal relationship seems plausible, people naturally accept it rather than think about alternatives, and the consequences can be much greater than overpaying for an anti-tiger rock.
In April 2009, the Supreme Court of the United States heard oral arguments in the case of *Northwest Austin Municipal Utility District No. 1 v. Holder*. At issue was the Voting Rights Act, one of the federal civil rights laws enacted during the 1960s. Among other things, the law sought to prevent political jurisdictions (utility districts, cities, school boards, counties, etc.) in southern states from drawing boundaries and setting up election rules so as to favor the interests of white over black voters. Section 5 of the law required these states to obtain “preclearance” from the federal government before changing any election procedures. The Texas utility district argued that since the law imposed these requirements only on some of the states in the union (mostly those that had been—a hundred years earlier—part of the Confederacy), it unconstitutionally discriminated against them.
Chief Justice John Roberts asked Neal Katyal, the government’s lawyer, about the import of the fact that just one out of every two thousand applications for an election rule change is rejected. Katyal answered, “I think what that represents is that Section 5 is actually working very well; that it provides a deterrent.” Roberts might have had the bear patrol episode in the back of his mind when he replied: “Well, that’s like the old elephant whistle—you know, I have this whistle to keep away the elephants. You know, well, that’s silly. Well, there are no elephants, so it must work.”
15
Roberts’s point, though he expressed it in the language of *The Simpsons* rather than that of cognitive psychology, is that the illusion of cause can make us assume that one event (the passage of the law) caused another event (the virtual end of discriminatory election rules), when the available data don’t logically establish such a relationship. The fact that the government grants preclearance almost every time says nothing about whether the law caused compliance. Something other than the law—such as a gradual reduction of racism, or at least overtly racist practices, over time—might have caused the change.
We are taking no position on whether this part of the Voting Rights Act is necessary today; it may be or it may not be. But this is precisely the point: We have no way to know how useful it is if the only information we have is that virtually nobody is violating it. It’s possible that jurisdictions would behave consistently with the proscriptions of the law even if it were no longer on the books.
The problem illustrated by the arguments over the Voting Rights Act is endemic in public policy. How many laws are passed, renewed, or repealed on the basis of a truly causal understanding of their effects on behavior? We often speak of the clichéd danger of unintended consequences, but we rarely think about how little we can actually say about the *intended* consequences of government action. We know what was happening before the law or regulation went into effect, and we may know that something different happened afterward, but that alone does not prove that the law *caused* the difference. The only way to measure the causal effect of a law would be to conduct an experiment. In the case of the Voting Rights Act, the closest one could come would be to repeal Section 5 for a randomly selected group of jurisdictions and compare those with the rest over time, examining how many discriminatory electoral rules are enacted in each case. If the rate of discrimination differed between the two groups, we could infer that the law has an effect.
16
Of course, the law might still violate the Constitution, but there are some questions that even clever experimentation and data analysis can’t answer!
This tendency to neglect alternative paths to the same outcome in favor of a single narrative pervades many of the bestselling business books.
17
Almost every report claiming to identify the key factors that lead companies to succeed, from *In Search of Excellence* to *Good to Great*, errs by considering only companies that succeeded and then analyzing what they did. They don’t look at whether other companies did those same things and failed. Malcolm Gladwell’s bestseller *The Tipping Point* describes the remarkable reversal of fortune for the maker of unfashionable Hush Puppies after their shoes suddenly became trendy. Gladwell argues that Hush Puppies succeeded because they were adopted by a trendy subculture, which made them appealing and generated buzz. And he’s right that Hush Puppies generated buzz. But the conclusion that the buzz caused their success follows only from a retrospective narrative bias and not from an experiment. In fact, it’s not even clear that there’s an association between buzz and success in the data. To establish even a noncausal association we would need to know how many other similar companies took off without first generating a buzz, and how many other companies generated similar buzz but remained grounded. Only then could we start worrying about whether the buzz caused the success—or whether the causation really ran in the other direction (success leading to buzz), or even in both directions simultaneously (a virtuous cycle).
There is one final pitfall inherent in turning chronology into causality. Because we perceive sequences of events as part of a timeline, with one leading to the next, it is hard to see that there are almost always many interrelated reasons or causes for a single outcome. The sequential nature of time leads people to act as though a complex decision or event must have only a single cause. We make fun of the enthusiasts of conspiracy theories for thinking this way, but they are just operating under a more extreme form of the illusion of cause that affects us all. Here are some statements made by Chris Matthews, host of the MSNBC news program *Hardball*, about the origins of the 2003 U.S. invasion of Iraq:
“What is *the motive* for this war?” (February 4, 2003)
“I wanted to know whether 9/11 is *the reason*, because a lot of people think it’s payback.” (February 6, 2003)
“Do you believe the weapons of mass destruction was *the reason* for this war?” (October 24, 2003)
“… *the reason* we went to war with Iraq was not to make a better Iraq. It was to kill the bad guys.” (October 31, 2003)
“President Bush says he wants democracy to spread throughout the Middle East. Was that *the real reason* behind the war in Iraq?” (November 7, 2003)
“Why do you think we went to Iraq? *The real reason*, not the sales pitch.” (October 9, 2006)
“Their reason for this war, which they don’t regret, was never *the reason* they used to sell us on the war.” (January 29, 2009)
We added the emphasis in each statement to show how it presupposes that the war must have had a single motive, reason, or cause. In the mind of a decision maker (or perhaps a “decider,” in this case), there might seem to be just one reason for a decision. But of course nearly every complex decision has multiple, complex causes. In this case, even as he searched for the one true reason, Matthews identified a wide variety of possibilities: weapons of mass destruction, Iraq’s support of terrorism, Saddam Hussein’s despotism, and the strategic goal of establishing democracy in Arab countries, to name only the most prominent. And they all arose against the backdrop of a new post-9/11 sensitivity to the possibility of enemies launching attacks on the U.S. homeland. Had one or some of these preconditions not been in place, the war might not have been launched. But it is not possible to isolate just one of them after the fact and say it was *the reason* for the invasion.
18
This kind of faulty reasoning about cause and effect is just as common in business as in politics. Sherry Lansing, long described as the most powerful woman in Hollywood, was CEO of Paramount Pictures from 1992 to 2004. She oversaw megahits like *Forrest Gump* and *Titanic*, and films from her studio received three Academy Awards for Best Picture. According to an article in the *Los Angeles Times*, after a series of failed projects and declines in Paramount’s share of box-office revenues, Lansing’s contract was not renewed. She resigned a year early, and it was widely believed that she had effectively been fired for poor performance. But just as the hits weren’t due solely to her genius, the duds couldn’t have been due solely to her screwups—hundreds of other people have creative influence on each movie, and hundreds of factors determine whether a movie captures the imagination (and cash) of audiences.
Lansing’s successor, Brad Grey, was lauded for turning the studio around; two of the first films released under his leadership, *War of the Worlds* and *The Longest Yard*, were top grossers in 2005. However, both movies were conceived and produced during Lansing’s tenure. If she had just hung on for a few more months, she would have received the credit and might have remained in charge.
19
There’s no doubt that a CEO is officially responsible for the performance of her company, but attributing all of the company’s successes or failures to the one person at the top is a classic illustration of the illusion of cause.
Let’s return to the story that began this chapter, about the six-year-old girl who contracted measles at a church meeting in Indiana after an unvaccinated missionary returned from Romania and spread the disease. We asked why parents would forgo a vaccine that helped to eliminate a serious and extremely contagious childhood disease. Now that we have discussed the three biases underlying the illusion of cause—overzealous pattern detection mechanisms, the unjustified leap from correlation to causation, and the inherent appeal of chronological narratives—we can begin to explain why some people voluntarily choose not to vaccinate their children against measles. The answer is that these parents, the media, some high-profile celebrities, and even some doctors have fallen prey to the illusion of cause. More precisely, they perceive a pattern where none actually exists and mistake a coincidence of timing for a causal relationship.
Autism is a pervasive developmental disorder that currently affects about 1 in 110 children. The diagnosis of autism has become more common over the past decade in the United States.
20
The symptoms of autism include delayed or impaired language and social skills. Prior to age two, most children engage in “parallel play”—doing the same things as other children they play with, but not interacting directly with them. And many kids are not very verbal before age two. Autism is most frequently diagnosed during preschool, when typically developing children start playing interactively and their language development accelerates. Many parents of autistic children begin noticing that something isn’t quite right with their kids around age two, and in some relatively rare cases, a child who had been developing normally starts to regress and loses the ability to communicate. These symptoms tend to be most noticeable to parents not long after their children have been vaccinated for measles, mumps, and rubella (MMR). In other words, the most clear-cut symptoms of autism become much more pronounced after childhood vaccinations.
By now, you should recognize the harbingers of the illusion of cause. Parents and scientists seeking a cause for the increase in autism rates spotted this association and inferred a causal relationship. Parents who saw no symptoms before the vaccinations noticed them afterward, a chronological pattern consistent with a causal narrative. They also noticed that increases in vaccination rates roughly coincided with increases in the diagnosis of autism. All three of the major contributors to the illusion of cause—pattern, correlation, and chronology—converged in this case. Of course, the increase in the frequency of the autism diagnosis also coincided with an increase in piracy off the coast of Somalia, but nobody argues that autism causes piracy (or that pirates cause autism, for that matter). The association has to have a plausible causal link, a connection that makes intuitive sense on its surface. It needs to provide an “Aha!” experience, one that taps our pattern perception mechanisms and triggers the illusion of cause. It needs more than the perception of an intuitive causal link to become a popular movement, though. It needs a credible authority to validate the causal link. In the case of vaccines and autism, it needed Dr. Andrew Wakefield.
21
Andrew Wakefield was a prominent London physician who in 1998 announced the discovery of a link between autism and the MMR vaccine. He and a group of colleagues published an article in the medical journal *The Lancet* that suggested a link between the MMR vaccine and several cases of autism.
22
At a press conference on the day his paper was released, Wakefield explained how he came to this belief: “In 1995, I was approached by parents—articulate, well-educated, and concerned—who told me the stories of their children’s deterioration into autism … Their children had developed normally for the first fifteen to eighteen months of life when they received the MMR vaccination. But after a variable period the children regressed, losing speech, language, social skills, and imaginative play, descending into autism.”
23
Wakefield’s announced link between autism and the so-called “triple jab” received extensive popular media attention, which likely led some parents to begin refusing MMR vaccination for their children, in turn contributing to reduced population immunity to measles in Great Britain.