1—All right, but how much and how fast?
2—Same thing. Is Heraclitean flux as normal or desirable as gradual change? Do some changes serve the language’s overall pizzazz better than others? And how many people have to deviate from how many conventions before we say the language has actually changed? Fifty percent? Ten percent? Where do you draw the line? Who draws the line?
3—This is an old claim, at least as old as Plato’s Phaedrus. And it’s specious. If Derrida and the infamous Deconstructionists have done nothing else, they’ve successfully debunked the idea that speech is language’s primary instantiation.[27] Plus consider the weird arrogance of Gove’s (3) with respect to correctness. Only the most mullah-like Prescriptivists care all that much about spoken English; most Prescriptive usage guides concern Standard Written English.[28]
4—Fine, but whose usage? Gove’s (4) begs the whole question. What he wants to suggest here, I think, is a reversal of the traditional entailment-relation between abstract rules and concrete usage: instead of usage’s ideally corresponding to a rigid set of regulations, the regulations ought to correspond to the way real people are actually using the language. Again, fine, but which people? Urban Latinos? Boston Brahmins? Rural Midwesterners? Appalachian Neogaelics?
5—Huh? If this means what it seems to mean, then it ends up biting Gove’s whole argument in the ass. Principle (5) appears to imply that the correct answer to the above “which people?” is: All of them. And it’s easy to show why this will not stand up as a lexicographical principle. The most obvious problem with it is that not everything can go in The Dictionary. Why not? Well, because you can’t actually observe and record every last bit of every last native speaker’s “language behavior,” and even if you could, the resultant dictionary would weigh four million pounds and need to be updated hourly.[29] The fact is that any real lexicographer is going to have to make choices about what gets in and what doesn’t. And these choices are based on … what? And so we’re right back where we started.
It is true that, as a SNOOT, I am naturally predisposed to look for flaws in Gove et al.’s methodological argument. But these flaws still seem awfully easy to find. Probably the biggest one is that the Descriptivists’ “scientific lexicography”—under which, keep in mind, the ideal English dictionary is basically number-crunching: you somehow observe every linguistic act by every native/naturalized speaker of English and put the sum of all these acts between two covers and call it The Dictionary—involves an incredibly crude and outdated understanding of what scientific means. It requires a naive belief in scientific Objectivity, for one thing. Even in the physical sciences, everything from quantum mechanics to Information Theory has shown that an act of observation is itself part of the phenomenon observed and is analytically inseparable from it.
If you remember your old college English classes, there’s an analogy here that points up the trouble scholars get into when they confuse observation with interpretation. It’s the New Critics.[30] Recall their belief that literary criticism was best conceived as a “scientific” endeavor: the critic was a neutral, careful, unbiased, highly trained observer whose job was to find and objectively describe meanings that were right there, literally inside pieces of literature. Whether you know what happened to New Criticism’s reputation depends on whether you took college English after c. 1975; suffice it to say that its star has dimmed. The New Critics had the same basic problem as Gove’s Methodological Descriptivists: they believed that there was such a thing as unbiased observation. And that linguistic meanings could exist “Objectively,” separate from any interpretive act.
The point of the analogy is that claims to Objectivity in language study are now the stuff of jokes and shudders. The positivist assumptions that underlie Methodological Descriptivism have been thoroughly confuted and displaced—in Lit by the rise of post-structuralism, Reader-Response Criticism, and Jaussian Reception Theory, in linguistics by the rise of Pragmatics—and it’s now pretty much universally accepted that (a) meaning is inseparable from some act of interpretation and (b) an act of interpretation is always somewhat biased, i.e., informed by the interpreter’s particular ideology. And the consequence of (a)+(b) is that there’s no way around it—decisions about what to put in The Dictionary and what to exclude are going to be based on a lexicographer’s ideology. And every lexicographer’s got one. To presume that dictionary-making can somehow avoid or transcend ideology is simply to subscribe to a particular ideology, one that might aptly be called Unbelievably Naive Positivism.
There’s an even more important way Descriptivists are wrong in thinking that the scientific method developed for use in chemistry and physics is equally appropriate to the study of language. This one doesn’t depend on stuff about quantum uncertainty or any kind of postmodern relativism. Even if, as a thought experiment, we assume a kind of 19th-century scientific realism—in which, even though some scientists’ interpretations of natural phenomena might be biased,
31
the natural phenomena themselves can be supposed to exist wholly independent of either observation or interpretation—it’s still true that no such realist supposition can be made about “language behavior,” because such behavior is both
human
and fundamentally
normative
.
To understand why this is important, you have only to accept the proposition that language is by its very nature public—i.e., that there is no such thing as a private language[32]—and then to observe the way Descriptivists seem either ignorant of this fact or oblivious to its consequences, as in for example one Dr. Charles Fries’s introduction to an epigone of Webster’s Third called The American College Dictionary:
A dictionary can be an “authority” only in the sense in which a book of chemistry or physics or of botany can be an “authority”—by the accuracy and the completeness of its record of the observed facts of the field examined, in accord with the latest principles and techniques of the particular science.
This is so stupid it practically drools. An “authoritative” physics text presents the results of physicists’ observations and physicists’ theories about those observations. If a physics textbook operated on Descriptivist principles, the fact that some Americans believe electricity flows better downhill (based on the observed fact that power lines tend to run high above the homes they serve) would require the Electricity Flows Better Downhill Hypothesis to be included as a “valid” theory in the textbook—just as, for Dr. Fries, if some Americans use infer for imply or aspect for perspective, these usages become ipso facto “valid” parts of the language. The truth is that structural linguists like Gove and Fries are not scientists at all; they’re pollsters who misconstrue the importance of the “facts” they are recording. It isn’t scientific phenomena they’re observing and tabulating, but rather a set of human behaviors, and a lot of human behaviors are—to be blunt—moronic. Try, for instance, to imagine an “authoritative” ethics textbook whose principles were based on what most people actually do.
Grammar and usage conventions are, as it happens, a lot more like ethical principles than like scientific theories. The reason the Descriptivists can’t see this is the same reason they choose to regard the English language as the sum of all English utterances: they confuse mere regularities with norms.
Norms aren’t quite the same as rules, but they’re close. A norm can be defined here simply as something that people have agreed on as the optimal way to do things for certain purposes. Let’s keep in mind that language didn’t come into being because our hairy ancestors were sitting around the veldt with nothing better to do. Language was invented to serve certain very specific purposes—“That mushroom is poisonous”; “Knock these two rocks together and you can start a fire”; “This shelter is mine!” and so on. Clearly, as linguistic communities evolve over time, they discover that some ways of using language are better than others—not better
a priori,
but better with respect to the community’s purposes. If we assume that one such purpose might be communicating which kinds of food are safe to eat, then we can see how, for example, a misplaced modifier could violate an important norm: “People who eat that kind of mushroom often get sick” confuses the message’s recipient about whether he’ll get sick only if he eats the mushroom frequently or whether he stands a good chance of getting sick the very first time he eats it. In other words, the fungiphagic community has a vested practical interest in excluding this kind of misplaced modifier from acceptable usage; and, given the purposes the community uses language for, the fact that a certain percentage of tribesmen screw up and use misplaced modifiers to talk about food safety does not
eo ipso
make m.m.’s a good idea.
Maybe now the analogy between usage and ethics is clearer. Just because people sometimes lie, cheat on their taxes, or scream at their kids, this doesn’t mean that they think those things are “good.”[33] The whole point of establishing norms is to help us evaluate our actions (including utterances) according to what we as a community have decided our real interests and purposes are. Granted, this analysis is oversimplified; in practice it’s incredibly hard to arrive at norms and to keep them at least minimally fair or sometimes even to agree on what they are (see e.g. today’s Culture Wars). But the Descriptivists’ assumption that all usage norms are arbitrary and dispensable leads to—well, have a mushroom.
The different connotations of arbitrary here are tricky, though—and this sort of segues into the second main kind of Descriptivist argument. There is a sense in which specific linguistic conventions really are arbitrary. For instance, there’s no particular metaphysical reason why our word for a four-legged mammal that gives milk and goes moo is cow and not, say, prtlmpf. The uptown term for this is “the arbitrariness of the linguistic sign,”[34] and it’s used, along with certain principles of cognitive science and generative grammar, in a more philosophically sophisticated version of Descriptivism that holds the conventions of SWE to be more like the niceties of fashion than like actual norms. This “Philosophical Descriptivism” doesn’t care much about dictionaries or method; its target is the standard SNOOT claim that prescriptive rules have their ultimate justification in the community’s need to make its language meaningful and clear.
Steven Pinker’s 1994
The Language Instinct
is a good and fairly literate example of this second kind of Descriptivist argument, which, like the Gove-et-al. version, tends to deploy a jr.-high-filmstrip
SCIENCE: POINTING THE WAY TO A BRIGHTER TOMORROW
- type tone:
[T]he words “rule” and “grammar” have very different meanings to a scientist and a layperson. The rules people learn (or, more likely, fail to learn) in school are called “prescriptive” rules, prescribing how one ought to talk. Scientists studying language propose “descriptive” rules, describing how people do talk. Prescriptive and descriptive grammar are simply different things.[35]
The point of this version of Descriptivism is to show that the descriptive rules are more fundamental and way more important than the prescriptive rules. The argument goes like this. An English sentence’s being meaningful is not the same as its being grammatical. That is, such clearly ill-formed constructions as “Did you seen the car keys of me?” or “The show was looked by many people” are nevertheless comprehensible; the sentences do, more or less, communicate the information they’re trying to get across. Add to this the fact that nobody who isn’t damaged in some profound Oliver Sacksish way actually ever makes these sorts of very deep syntactic errors[36] and you get the basic proposition of N. Chomsky’s generative linguistics, which is that there exists a Universal Grammar beneath and common to all languages, plus that there is probably an actual part of the human brain that’s imprinted with this Universal Grammar the same way birds’ brains are imprinted with Fly South and dogs’ with Sniff Genitals. There’s all kinds of compelling evidence and support for these ideas, not least of which are the advances that linguists and cognitive scientists and AI researchers have been able to make with them, and the theories have a lot of credibility, and they are adduced by the Philosophical Descriptivists to show that since the really important rules of language are at birth already hardwired into people’s neocortex, SWE prescriptions against dangling participles or mixed metaphors are basically the linguistic equivalent of whalebone corsets and short forks for salad. As Steven Pinker puts it, “When a scientist considers all the high-tech mental machinery needed to order words into everyday sentences, prescriptive rules are, at best, inconsequential decorations.”
This argument is not the barrel of drugged trout that Methodological Descriptivism was, but it’s still vulnerable to objections. The first one is easy. Even if it’s true that we’re all wired with a Universal Grammar, it doesn’t follow that all prescriptive rules are superfluous. Some of these rules really do seem to serve clarity and precision. The injunction against two-way adverbs (“People who eat this often get sick”) is an obvious example, as are rules about other kinds of misplaced modifiers (“There are many reasons why lawyers lie, some better than others”) and about relative pronouns’ proximity to the nouns they modify (“She’s the mother of an infant daughter who works twelve hours a day”).