How We Decide

Author: Jonah Lehrer

Kant and his followers thought the rational brain acted like a scientist: we used reason to arrive at an accurate view of the world. This meant that morality was based on objective values; moral judgments described moral facts. But the mind doesn't work this way. When you are confronted with an ethical dilemma, the unconscious automatically generates an emotional reaction. (This is what psychopaths can't do.) Within a few milliseconds, the brain has made up its mind; you know what is right and what is wrong. These moral instincts aren't rational—they've never heard of Kant—but they are an essential part of what keeps us all from committing unspeakable crimes.

It's only at this point—after the emotions have already made the moral decision—that those rational circuits in the prefrontal cortex are activated. People come up with persuasive reasons to justify their moral intuition. When it comes to making ethical decisions, human rationality isn't a scientist, it's a lawyer. This inner attorney gathers bits of evidence, post hoc justifications, and pithy rhetoric in order to make the automatic reaction seem reasonable. But this reasonableness is just a façade, an elaborate self-delusion. Benjamin Franklin said it best in his autobiography: "So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do."

In other words, our standard view of morality—the philosophical consensus for thousands of years—has been exactly backward. We've assumed that our moral decisions are the byproducts of rational thought, that humanity's moral rules are founded in such things as the Ten Commandments and Kant's categorical imperative. Philosophers and theologians have spilled lots of ink arguing about the precise logic of certain ethical dilemmas. But these arguments miss the central reality of moral decisions, which is that logic and legality have little to do with anything.

Consider this moral scenario, which was first invented by Haidt. Julie and Mark are siblings vacationing together in the south of France. One night, after a lovely day spent exploring the local countryside, they share a delicious dinner and a few bottles of red wine. One thing leads to another and Julie and Mark decide to have sex. Although she's on the pill, Mark uses a condom just in case. They enjoy themselves very much, but decide not to have sex again. The siblings promise to keep the one-night affair secret and discover, over time, that having sex has brought them even closer together. Did Julie and Mark do something wrong?

If you're like most people, your first reaction is that the brother and sister committed a grave sin. What they did was very wrong. When Haidt asks people to explain their harsh moral judgments, the most common reasons given are the risk of having kids with genetic abnormalities and the possibility that sex will damage the sibling relationship. At this point, Haidt politely points out that Mark and Julie used two types of birth control and that having sex actually improved their relationship. But the facts of the case don't matter. Even when their arguments are disproved, people still cling to the belief that having sex with one's brother or sister is somehow immoral.

"What happens in the experiment," Haidt says, "is [that] people give a reason [why the sex is wrong]. When that reason is stripped from them, they give another reason. When the new reason is stripped from them, they reach for
another
reason." Eventually, of course, people run out of reasons: they've exhausted their list of moral justifications. The rational defense is forced to rest its case. That's when people start saying things like "Because it's just wrong to have sex with your sister" or "Because it's disgusting, that's why!" Haidt calls this state "moral dumbfounding." People know something seems morally wrong—sibling sex is a terrible idea—but no one can rationally defend the verdict. According to Haidt, this simple story about sibling sex illuminates the two separate processes that are at work when we make moral decisions. The emotional brain generates the verdict. It determines what is wrong and what is right. In the case of Julie and Mark, it refuses to believe that having sex with a sibling is morally permissible, no matter how many forms of birth control are used. The rational brain, on the other hand,
explains
the verdict. It provides reasons, but those reasons all come after the fact.

This is why psychopaths are so dangerous: they are missing the emotions that guide moral decisions in the first place. There's a dangerous void where their feelings are supposed to be. For people like Gacy, sin is always intellectual, never visceral. As a result, a psychopath is left with nothing but a rational lawyer inside his head, willing to justify any action. Psychopaths commit violent crimes because their emotions never tell them not to.

2

Moral decisions are a unique kind of decision. When you're picking out products in the grocery store, searching for the best possible strawberry jam, you are trying to maximize your own enjoyment. You are the only person who matters; it is your own sense of pleasure that you are trying to satisfy. In this case, selfishness is the ideal strategy. You should listen to those twitchy cells in the orbitofrontal cortex that tell you what you really want.

However, when you are making a moral decision, this egocentric strategy backfires. Moral decisions require taking other people into account. You can't act like a greedy brute or let your anger get out of control; that's a recipe for depravity and jail time. Doing the right thing means thinking about everybody else, using the emotional brain to mirror the emotions of strangers. Selfishness needs to be balanced by some selflessness.

The evolution of morality required a whole new set of decision-making machinery. The mind needed to evolve some structures that would keep it from hurting other people. Instead of just seeking more pleasure, the brain had to become sensitive to the pain and plight of strangers. The new neural structures that developed are a very recent biological adaptation. While people have the same reward pathway as rats—every mammal relies on the dopamine system—moral circuits can be found in only the most social primates. Humans, of course, are the most social primates of all.

The best way to probe the unique brain circuits underlying morality is by using a brain scanner to study people while they are making moral decisions. Consider this elegant experiment, led by neuroscientist Joshua Greene of Harvard. Greene asked his subjects a series of questions involving a runaway trolley, an oversize man, and five maintenance workers. (It might sound like a strange setup, but it's actually based on a well-known philosophical thought puzzle.) The first scenario goes like this:

You are the driver of a runaway trolley. The brakes have failed. The trolley is approaching a fork in the track at top speed. If you do nothing, the trolley will stay left, where it will run over five maintenance workers who are fixing the track. All five workers will die. However, if you steer the trolley right—this involves flicking a switch and turning the wheel—you will swerve onto a track where there is one maintenance worker. What do you do? Are you willing to intervene and change the path of the trolley?

In this hypothetical case, about 95 percent of people agree that it is morally permissible to turn the trolley. The decision is just simple arithmetic: it's better to kill fewer people. Some moral philosophers even argue that it is immoral not to turn the trolley, since passivity will lead to the death of four more people. But what about this scenario:

You are standing on a footbridge over the trolley track. You see a trolley racing out of control, speeding toward five workmen who are fixing the track. All five men will die unless the trolley can be stopped. Standing next to you on the footbridge is a very large man. He is leaning over the railing, watching the trolley hurtle toward the men. If you sneak up on the man and give him a little push, he will fall over the railing and into the path of the trolley. Because he is so big, he will stop the trolley from killing the maintenance workers. Do you push the man off the footbridge? Or do you allow five men to die?

The brute facts, of course, remain the same: one man must die in order for five men to live. If ethical decisions were perfectly rational, then a person would act the same way in both situations and be as willing to push the man off the bridge as he or she was to turn the trolley. And yet, almost nobody is willing to actively throw another person onto the train tracks. The decisions lead to the same outcome, yet one is moral and one is murder.

Greene argues that pushing the man feels wrong because the killing is direct: you are using your body to hurt his body. He calls it a personal moral situation, since it directly involves another person. In contrast, when you just have to turn the trolley onto a different track, you aren't directly hurting somebody else, you're just shifting the trolley wheels; the resulting death seems indirect. In this case, it's an impersonal moral decision.

What makes this thought experiment so interesting is that the fuzzy moral distinction—the difference between personal and impersonal decisions—is built into the brain. It doesn't matter what culture you live in, or what religion you subscribe to: the two different trolley scenarios trigger distinct patterns of activation. In the first scenario, when a subject was asked whether the trolley should be turned, the rational decision-making machinery was turned on. A network of brain regions assessed the various alternatives and sent its verdict onward to the prefrontal cortex, and the person chose the clearly superior option. The brain quickly realized that it was better to kill one man than five men.

However, when a subject was asked whether he would be willing to push a man onto the tracks, a separate network of brain areas was activated. These folds of gray matter—the superior temporal sulcus, posterior cingulate, and medial frontal gyrus—are responsible for interpreting the thoughts and feelings of other people. As a result, the subject automatically imagined how the poor man would feel as he plunged to his death on the train tracks below. He vividly simulated the man's mind and concluded that pushing him was a capital crime, even if it saved the lives of five other men. The person couldn't explain the moral decision—the inner lawyer was confused by the inconsistency—but his certainty never wavered. Pushing a man off a bridge just felt wrong.

While stories of Darwinian evolution often stress the amorality of natural selection—we are all Hobbesian brutes, driven to survive by selfish genes—our psychological reality is much less bleak. We aren't angels, but we also aren't depraved hominids. "Our primate ancestors," Greene explains, "had intensely social lives. They evolved mental mechanisms to keep them from doing all the nasty things they might otherwise be interested in doing. This basic primate morality doesn't understand things like tax evasion, but it does understand things like pushing your buddy off of a cliff." As Greene puts it, a personal moral violation can be roughly defined as "me hurts you," a concept simple enough for a primate to understand.

This is a blasphemous idea. Religious believers assume that God invented the moral code. It was given to Moses on Mount Sinai, a list of imperatives inscribed in stone. (As Dostoyevsky put it, "If there is no God, then we are lost in a moral chaos. Everything is permitted.") But this cultural narrative gets the causality backward. Moral emotions existed long before Moses. They are writ into the primate brain. Religion simply allows us to codify these intuitions, to translate the ethics of evolution into a straightforward legal system. Just look at the Ten Commandments. After God makes a series of religious demands—don't worship idols and always keep the Sabbath—He starts to issue moral orders. The first order is the foundation of primate morality: thou shalt not kill. Then comes a short list of moral adjuncts, which are framed in terms of harm to another human being. God doesn't tell us merely not to lie; He tells us not to bear false witness against our neighbor. He doesn't prohibit jealousy only in the abstract; He commands us not to covet our neighbor's "wife or slaves or ox or donkey." The God of the Old Testament understands that our most powerful moral emotions are generated in response to personal moral scenarios, so that's how He frames all of His instructions. The details of the Ten Commandments reflect the details of the evolved moral brain.

These innate emotions are so powerful that they keep people moral even in the most amoral situations. Consider the behavior of soldiers during war. On the battlefield, men are explicitly encouraged to kill one another; the crime of murder is turned into an act of heroism. And yet, even in such violent situations, soldiers often struggle to get past their moral instincts. During World War II, for example, U.S. Army Brigadier General S.L.A. Marshall undertook a survey of thousands of American troops right after they'd been in combat. His shocking conclusion was that less than 20 percent actually shot at the enemy, even when under attack. "It is fear of killing," Marshall wrote, "rather than fear of being killed, that is the most common cause of battle failure in the individual." When soldiers were forced to confront the possibility of directly harming other human beings—this is a personal moral decision—they were literally incapacitated by their emotions. "At the most vital point of battle," Marshall wrote, "the soldier becomes a conscientious objector."

After these findings were published, in 1947, the U.S. Army realized it had a serious problem. It immediately began revamping its training regimen in order to increase the "ratio of fire." New recruits began endlessly rehearsing the kill, firing at anatomically correct targets that dropped backward after being hit. As Lieutenant Colonel Dave Grossman noted, "What is being taught in this environment is the ability to shoot reflexively and instantly ... Soldiers are de-sensitized to the act of killing, until it becomes an automatic response." The army also began emphasizing battlefield tactics, such as high-altitude bombing and long-range artillery, that managed to obscure the personal cost of war. When bombs are dropped from forty thousand feet, the decision to fire is like turning a trolley wheel: people are detached from the resulting deaths.
