But on reflection, we – as theorists – can separate these rules into kinds. Conventional rules, as they're sometimes called, get their authority from some convention or practice or person. Raising your hand in class is a rule that gets its meaning and authority from conventions associated with school; putting your dishes away gets its meaning and authority from conventions associated with the family; not picking your nose gets its meaning and authority from conventions associated with etiquette; and so on.
Moral rules, on the other hand, do not seem to depend on conventions. They appear to be authority-independent. They're treated as more serious, “generalizable,” and unconditionally obligatory (recall our discussion of morality from §3.1). When we justify following them, we're likely to appeal to such things as justice, a person's welfare, and rights. Now here's the surprising thing: despite the indiscriminate way rules are presented to children, they nevertheless grasp the distinction between conventional rules and moral rules. Here's how we know.
In the mid-1980s the psychologist Eliot Turiel (1983) and his colleagues conducted experiments (which have been replicated numerous times in different settings) asking children as young as 3 to consider some hypothetical situations involving rule-changes. For example, children are told to imagine their teacher saying: “Today, if you want to talk in class, you do not have to raise your hand.” Children are then asked, “Would it be OK if today you talked in class without raising your hand?” With no hesitation, children say yes. Similarly, children say it would be OK to throw food if their parents said it would be OK to throw food.
But then children are asked to imagine their teacher saying: “Today, if you want to hit your friend in the face, you may do so.” In this case, children almost never say it would be OK to hit their friend, even with their teacher's permission. Most children – asked to imagine a parent saying: “Today, it would be OK for you to lie to your brother” – will nevertheless deny that it would be OK to lie to your brother. In a dramatic display of this distinction, the psychologist Larry Nucci asked Amish children to imagine a situation in which God made no rule against working on Sunday: 100 percent of the children said that, under those circumstances, it would be OK to work on Sunday. By contrast, when the children were asked to imagine a situation in which God made no rule against hitting others, 80 percent of the children said that, under those circumstances, hitting others would nevertheless not be OK (Nucci et al. 1983).
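Before drawing out the implications, it may help to see the bare logic of these findings in one place. The following sketch is purely illustrative – the rule names and the authority_dependent flag are my own labels, not anything from Turiel's or Nucci's materials – but it captures the pattern in the children's answers: an authority's permission settles the question for conventional rules and leaves it untouched for moral ones.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    action: str
    authority_dependent: bool  # True for conventional rules, False for moral rules

def ok_once_authority_permits(rule: Rule) -> bool:
    """Is the action OK after the relevant authority suspends the rule?

    On the children's pattern of answers, conventional rules lose their
    force when suspended, while moral rules do not.
    """
    return rule.authority_dependent

# Hypothetical stand-ins for the experimenters' vignettes.
rules = [
    Rule("talk without raising your hand", authority_dependent=True),
    Rule("throw food", authority_dependent=True),
    Rule("hit your friend in the face", authority_dependent=False),
    Rule("lie to your brother", authority_dependent=False),
]

for rule in rules:
    verdict = "OK" if ok_once_authority_permits(rule) else "still not OK"
    print(f"With permission, to {rule.action} is {verdict}.")
```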
The most immediate implication of this research is this: very young children seem to grasp the difference between conventional rules (e.g., raise your hand before speaking) and moral rules (e.g., don't hit others). Children seem to recognize that some rules can be suspended by an authority (e.g., food-throwing) and some rules cannot (e.g., no hitting). And what's so striking about this is that children recognize this difference despite the fact that the difference is not at all apparent in their upbringing. (We'll come back to this point shortly.) Shaun Nichols (2004) has drawn attention to another important implication of Turiel's work: many of the children who display an understanding of the difference between conventional and moral rules fail the “false belief” test. These are, after all, 3-year-olds. But if a child can grasp that some rules are authority-independent, generalizable, and unconditionally obligatory without recognizing the mental states of others, then maybe moral competency does not require perspective-taking, as we previously thought. According to Nichols, the rules children regard as authority-independent, generalizable, and serious “constitute an important core of moral judgment” (2004: 7). But if these same children have not developed a proper “theory of mind,” then maybe a theory of mind is not necessary for core moral judgments. Maybe the rules of morality are even more basic to our psychology than we thought. Perhaps there is a core of moral knowledge or competency that is innate.
5.4 Moral Innateness and the Linguistic Analogy

One of the more lively debates now going on among moral psychologists concerns just this question of innateness. To what extent, if any, is morality innate? Answering this question of course requires straightening out two things: what is morality and what is innateness? The first of these questions was addressed in §3.1, where (following Richard Joyce) we proposed a list of conditions that a creature must meet in order to be considered moral. Nichols is prepared to accept a slightly less demanding set of requirements. For him, a child who insists that it would not be OK to hit your friend even if the teacher said it was OK is morally competent. On Joyce's account, we would need to know more about the child in order to deem her morally competent. For our purposes, we need not decide the issue here: the results are worth discussing in either case.
The concept of innateness plays a role in a range of disciplines – biology, psychology, philosophy. It's no surprise then that the concept gets handled in different ways depending on which discipline employs it. In some places (e.g., biology), the concept is sometimes understood to mean “environmental invariance,” as in: “a trait is innate just in case it reliably develops across varying environments.” In other places (e.g., psychology), the concept is sometimes used to mean “not acquired via a psychological process.” Fortunately, we can explore some of the data without having to enter into this fight over definitions, for the data appear to suggest innateness according to most definitions.
But the case for moral innateness – at least according to recent defenders – begins in another neighborhood, one that has received a lot of attention in the last forty years: linguistics, that is, the study of language. To get things rolling, let's try a little interactive linguistics. Consider the following sentence:

(1) John said that Bill would feed himself.
In (1) must the word “himself” refer to John or Bill? Or could it refer to either of them or someone altogether different? (Lest there are any doubts, answers are revealed in note 5.) Try this sentence:

(2) John told Bill that he would feed himself.
In (2) must the word “himself” refer to John or Bill? Or could it refer to either of them or someone altogether different? (Take all the time you need.) How about the following:

(3) John said that Bill saw him.
In (3) must the word “him” refer to Bill or John? Or could it refer to either of them or someone altogether different? Try converting the following declarative into an interrogative – that is, into a question:

(4) The boy who is happy is missing.
In (4) is the correct interrogative “Is the boy who happy is missing?” or “Is the boy who is happy missing?” Convert this last one into an interrogative:

(5) The duck is sad that the chicken is missing.
In (5) is the correct interrogative “Is the duck sad that the chicken is missing?” or “Is the duck is sad that the chicken missing?”
Unless this chapter is putting you to sleep or you suffer from a specific language deficit, none of these inferences required much thought.⁵ But this is not what impresses linguists. What impresses linguists is not how we perform on them, but how 4-year-olds perform on them. 4-year-olds perform nearly flawlessly on these sorts of questions. Take the following example.
Experimenters introduce 4-year-olds to a puppet. They tell the children to ask the puppet if the boy who is happy is missing. Remember that this is a sentence no child has ever heard before, and its structure is also probably very new to children – a sentence with embedded auxiliary verbs (i.e., “is”). Now children have probably heard plenty of people transform declaratives, such as “The cat was in the hat,” into interrogatives: “Was the cat in the hat?” Much less common to a child's ear are transformations of these more complex sentences. So surely some children, when they're asked to transform the sentence “The boy who is happy is missing,” are going to say: “Is the boy who happy is missing?” After all, there are two auxiliary verbs to choose from; surely some kid is going to move the first one to the beginning of the sentence instead of the second. But 4-year-olds almost never do this. They always hit upon the correct transformation.
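To make the choice point concrete, here is a toy sketch of the two candidate transformations for sentence (4). It is plain string manipulation – the function name and the crude word-level treatment of auxiliaries are my own simplifications, not a real parser – but it shows what is at stake in picking one “is” over the other.

```python
# Toy illustration of auxiliary fronting in sentence (4):
# "The boy who is happy is missing."
# The sentence contains two auxiliaries ("is"); only one choice
# yields a genuine question.

SENTENCE = "the boy who is happy is missing"

def front_auxiliary(sentence: str, occurrence: int) -> str:
    """Move the nth occurrence of 'is' (1-indexed) to the front."""
    words = sentence.split()
    positions = [i for i, word in enumerate(words) if word == "is"]
    aux = words.pop(positions[occurrence - 1])
    question = " ".join([aux] + words) + "?"
    return question[0].upper() + question[1:]

# Fronting the linearly first "is" (the one inside "who is happy"):
print(front_auxiliary(SENTENCE, 1))  # Is the boy who happy is missing?  (word salad)

# Fronting the second "is" (the main-clause auxiliary):
print(front_auxiliary(SENTENCE, 2))  # Is the boy who is happy missing?  (correct)
```

Nothing on the surface of the word string marks which “is” is the right one to move; that is the puzzle the following paragraphs press.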
OK, maybe children are just lucky. Maybe they take a chance (all of them!) and plump for a rule that says that in transforming declaratives with embedded auxiliaries into interrogatives, always choose the last auxiliary verb to move to the beginning of the sentence. To test this, experimenters give these same 4-year-olds this sentence: “The duck is sad that the chicken is missing.” Now if children really are using the Last Auxiliary Verb rule, then we should hear them say: “Is the duck is sad that the chicken missing?” But this word salad almost never comes out of children's mouths. Somehow they know that, in this sentence, it's the first auxiliary verb that should be moved to the front of the sentence – not the second. How do children know this? How did they come upon this rule? Besides, what is the grammatical rule that governs these transformations? (Think you're so smart? Name it.)

This is just one of a host of examples that demonstrate that, as the psycholinguist Steven Pinker argues, “children deserve most of the credit for the language they acquire.” Indeed, Pinker goes further: “we can show that they know things they could not have been taught” (1994: 40). But this immediately raises the question: If children know things about language that they could not have been taught, then where does this knowledge come from? You guessed it: they're born with it. Pinker offers this little analogy:

People know how to talk in more or less the sense that spiders know how to spin webs. Web-spinning was not invented by some unsung spider genius and does not depend on having had the right education or on having an aptitude for architecture or the construction trades. Rather, spiders spin spider webs because they have spider brains, which give them the urge to spin and the competence to succeed. (Pinker 1994: 18)

According to Pinker, our brains are wired for language. Language is “an instinct.” It's worth repeating, however, that linguists feel driven to this position because children know things that they could not plausibly acquire from their environment. This is why the argument that drives these linguists is referred to as the poverty of stimulus argument: the linguistic stimulus to which children are exposed is too impoverished to explain what children know about language. So what does this have to do with morality?
The answer is this: we have a moral instinct, just as we have a language instinct. And the reason for believing this is (you see it coming?) a moral poverty of stimulus argument: the moral stimulus to which children are exposed is too impoverished to explain what children know about morality. The philosopher and legal scholar John Mikhail (2009) was the first to rigorously push this idea. According to Mikhail, the moral/conventional studies made famous by Turiel offer the most persuasive case for an innate morality. Turiel showed that children as young as 3 make the distinction between moral rules and conventional rules. But, argued Mikhail, to explain children's capacity to make this distinction, we should expect one of three things: (a) children were trained by their caregivers to make the distinction; (b) children learned to make the distinction by studying their environment; or (c) children develop the capability by way of innate processes. (Note the parallel with language. To explain, for example, children's capacity to correctly transform declaratives with embedded auxiliaries into interrogatives, we should expect one of three corresponding things: (a) training, (b) learning, or (c) innateness.) Well, it's pretty clear that caregivers do not explicitly train their children in how to make the distinction between moral rules and conventional rules. Most caregivers have probably never even considered this distinction. It also seems implausible that children have learned the distinction by studying their environment. Recall that caregivers make no effort to distinguish moral rules from conventional rules. What the child hears is “Don't do that!” – whether that's throwing food or telling a lie. And it can't be that moral rules are treated as more serious than conventional rules, since the consequences for breaking some conventional rules can be just as severe as (or even more severe than) those for breaking moral rules. It may depend on the caregiver; it may depend on the day of the week. This leaves (c), the view that children develop the capability by way of innate processes.
But Turiel's work is not the only evidence cited by supporters of innateness. Children also appear to recognize another sort of distinction. For example, a child is told: “If today is Saturday, then tomorrow must be Sunday.” Suppose that the child is told that tomorrow is not Sunday. Could today be Saturday? Children as young as 4 reliably say no. But then a child is told: “If Sam goes outside, then Sam must wear a hat.” The child is then told that Sam is not wearing a hat. Could Sam possibly be outside? Despite the identical structure of these two judgments (if p is F, then q must be G), kids somehow recognize that, yes, Sam could be outside. How? Sam is being naughty! The distinction here has to do with conditionals – that is, if/then statements. We call conditionals like “If today is Saturday, then tomorrow must be Sunday” indicative since (very roughly) they indicate what is the case. When we say that Sunday must follow Saturday, we're reporting a conceptual necessity. On the other hand, conditionals like “If Sam goes outside, then he must wear a hat” are called deontic since (very roughly) they concern duties.
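The asymmetry the children track can be put in terms of licensed inferences: an indicative conditional supports modus tollens (a false consequent rules out the antecedent), while a deontic conditional does not (an unmet requirement signals only a violation). Here is a minimal illustrative sketch – the function names and return strings are my own, not anything from the studies.

```python
# Indicative: "If today is Saturday, then tomorrow must be Sunday."
# Deontic:    "If Sam goes outside, then Sam must wear a hat."
# Both have the surface form "if p, then must q", but a false q
# licenses different conclusions in the two cases.

def indicative_inference(consequent_true: bool) -> str:
    """Modus tollens: if the consequent fails, the antecedent fails."""
    if not consequent_true:
        return "antecedent false: today cannot be Saturday"
    return "nothing follows about the antecedent"

def deontic_inference(requirement_met: bool) -> str:
    """An unmet requirement does not falsify the antecedent."""
    if not requirement_met:
        return "antecedent still possible: Sam may be outside, breaking the rule"
    return "no violation detected"

print(indicative_inference(consequent_true=False))
print(deontic_inference(requirement_met=False))
```

The design point is that the “must” behaves differently in the two cases: in the indicative conditional it expresses a necessity about what is so; in the deontic conditional it expresses an obligation that the facts can flout – which is why Sam can be outside, hatless and naughty.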