Surfaces and Essences: Analogy as the Fuel and Fire of Thinking
Authors: Douglas Hofstadter, Emmanuel Sander
In his protest, Hobbes is a bit like someone who screams in order to praise silence, or like evangelistic television preachers who whip up the masses by speaking of the sins that lead straight to hell while themselves engaging in the very acts of debauchery that they decry. His protest is also reminiscent of a paradoxical phrase that encapsulated the tragedy of the Vietnam war: “We destroyed the village in order to save it.” In short, Hobbes undermines his anti-metaphor credo by expressing it metaphorically.
The eleventh-century Benedictine monk Alberic of Monte Cassino never knew anything approaching the fame of Hobbes, but he too wrote a virulent diatribe against the use of metaphors in his book The Flowers of Rhetoric. Here is an excerpt:
Expressing oneself with metaphors has the quality of distracting a person’s attention from the specific qualities of the object being described; in one manner or another, this distracting of attention makes the object resemble something different; it dresses it, if one may put it thusly, in a new wedding dress, and in so dressing it, it suggests that a new kind of nobility has been accorded to the object… Were a meal served in this fashion, it would disgust and nauseate us, and we would discard it… Consider that in one’s enthusiasm for giving pleasure through delicious novelty, it is unwise to begin by serving up flapdoodle. Be careful, I repeat, when you invite someone in the hopes of giving pleasure, that you not afflict him with so much malaise that he will vomit from it.
As we glide from “dressing an object in a new wedding dress” to “serving up flapdoodle” and “vomiting from it”, we are treated to one metaphor after another in a passage written for no other purpose than to criticize the use of metaphors.
Eight centuries later, Gaston Bachelard, a highly respected French philosopher of science, did not completely avoid the same trap when he wrote: “A science that accepts images is, more than any other, a victim of metaphors. Consequently, the scientific mind must never cease to fight against images, against analogies, against metaphors.” But how can science become a “victim”, and how can a mind, scientific or otherwise, “constantly fight” against anything, unless they do so metaphorically?
And so, are analogies like seductive and dangerous siren songs, likely to lead us astray, or are they more like indispensable searchlights, without which we would be plunged in total darkness? If one never trusted a single analogy, how could one understand anything in this world? What, other than one’s past, can one rely on in grounding decisions that one makes when facing a new situation? And of course all situations are in fact new, from the largest and most abstract ones down to the tiniest and most concrete ones. There isn’t a single thought that isn’t deeply and multiply anchored in the past.
To use the elevator in an apartment building that one has never been in before, does one not tacitly depend on the analogy with countless elevators that one has used before? And when one examines this analogy, one sees that, despite its seeming blandness, it depends on numerous others. For example, once you’ve entered the elevator, you have to choose a small button you’ve never seen before, and you have to press it with a certain finger and a certain force, and you do that without thinking about it whatsoever (or more accurately, without noticing that you are thinking about it). This means that you are unconsciously depending on your prior experiences with thousands of buttons in hundreds of elevators (and also buttons on keyboards, stereo systems, dashboards, etc.), and that you are working out the best way to deal with this new button by relying on an analogy between it and your personal category “button”.
And when, after you’ve stepped out of the elevator and are just setting foot in the sixth-floor apartment, you see a big dog coming towards you, how do you deal with this situation if not on the basis of your prior experience with dogs, particularly large dogs? And much the same could be said for when you wash your hands in the sink that you’ve never seen before with soap that you’ve never touched before — not to mention the bathroom door, the doorknob, the electric switch, the faucet, the towel, all never before seen or touched.
And if you go into a grocery store that you’ve never seen before and are looking for the sugar or the olives or the paper towels, where do you go? Which aisle, which shelf, and how high up on the shelf? Without any conscious effort, you recall “the” spot where these articles are found in other familiar stores. Of course you’re not thinking of just one place, but of a collection of various places that you mentally superimpose. You think, “The sugar should be around here”, where the word “here” refers simultaneously to a collection of small areas in various familiar grocery stores and also to a small area in the new store, and it’s “right there” that one looks first of all.
How mundane is the scene of an employee who, requesting an extra day of vacation, says to her boss, “Last year you offered an extra weekend to Katyanna, so I was wondering if you would be able to give me just one extra day next month…” How could one do anything in life if one felt that it was crucial to be constantly on the alert in order to mercilessly squelch any resemblance that came to mind at any level of abstraction or concreteness? And worse yet, once we’d squelched them all, what would we then do? On what basis would we make even the tiniest decision?
Might there be a rigorous proof that all analogies are dubious? Obviously not, because, as we just saw, everyone depends, without thinking, on a dense avalanche of mini-analogies between everyday things, and these mini-analogies follow on the heels of one another all day long, day in, day out — and seldom do such mundane analogies mislead anyone. Indeed, if they did, we would not be here to tell the tale.
How can computers be so terribly stupid, despite being so blindingly fast and having such huge and infallible memories? Contrariwise, how can human beings be so insightful despite being so limited in speed and having such small and fallible memories? Though perhaps hackneyed, these are reasonable and important questions, focusing as they do on the nearly paradoxical quality of human thought.
Indeed, the human mind, next to a computer, appears fraught with defects of every sort, coming off as hopelessly inferior along most dimensions of comparison. For instance, in carrying out pure reasoning tasks, well-polished computer algorithms reach logically valid conclusions virtually instantly, while people tend to fail most of the time. Much the same can be said about large amounts of knowledge. Where people’s minds are saturated after only a few pieces of information are presented, a computer can take into account a virtually unlimited amount of information. And of course human memory is notoriously unreliable; whereas computers never forget and never distort, those are activities at which we human beings excel, for better or for worse. Three days, three weeks, three months, or three years after we’ve seen a movie or read a book, what details of it remain accessible in our minds? And how distorted are they? We might also mention the speed at which processing takes place in computers as opposed to human brains. What might take us minutes, hours, or far longer can be done by a computer in an infinitesimal eyeblink. Just consider simple arithmetical calculations such as “3 + 5” (a bit under a second for a person), or “27 + 92” (perhaps five or ten seconds), or “27 × 92”, a calculation that most people could not carry out in their heads. Counting the number of words in a selected passage of text and correcting a multiple-choice exam are activities that we humans can carry out, but only with pathetic slowness compared to computers.
Overall, the comparison is extremely lopsided in favor of computers, for, as we just noted, computers carry out flawless reasoning and calculation way beyond human reach, handle unimaginably larger amounts of information than people can handle, do not forget things over short or long time scales, do not distort what they memorize, and carry out their processing at speeds incomparably greater than that of the human mind. In terms of rationality, size, reliability, and speed, the machines we have designed and built beat us hands down. If we then add to the human side of the ledger our easily distractable attention, the fatigue that often seriously interferes with our capacities, and the imprecision of our sensory organs, we are left straggling in the dust. If one were to draw up a table of numerical specifications, as is standardly done in comparing one computer with another, Homo sapiens sapiens would wind up in the recycling bin.
Given all this, how can we explain the fact that, in terms of serious thought, machines lag woefully behind us? Why is machine translation so often inept and awkward? Why are robots so primitive? Why is computer vision restricted to the simplest kinds of tasks? Why is it that today’s search engines can instantly search billions of Web sites for passages containing the phrase “in good faith”, yet are incapable of spotting Web sites in which the idea of good faith (as opposed to the string of alphanumeric characters) is the central theme?
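The gap being described here can be made concrete in a few lines of code. The sketch below (a deliberately simplified illustration, not how any actual search engine works) shows a literal substring search: it instantly finds the exact phrase “in good faith”, yet has no purchase on a sentence that expresses the very same idea in different words.

```python
# A toy "search engine" that matches a literal string of characters.
# The sample documents here are invented for illustration.
documents = [
    "The parties agreed to negotiate in good faith.",          # contains the exact phrase
    "Both sides promised to bargain honestly and sincerely.",  # same idea, different words
]

phrase = "in good faith"
matches = [doc for doc in documents if phrase in doc.lower()]

# Only the first document is retrieved; the second, whose central
# theme IS good faith, is invisible to a character-level search.
print(matches)
```

The point of the sketch is that `phrase in doc` compares sequences of characters, not meanings; recognizing that “bargain honestly and sincerely” belongs to the category of good faith is exactly the act of categorization through analogy that the following paragraphs describe.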
Readers will of course have anticipated the answer — namely, that our advantage is intimately linked to categorization through analogy, a mental mechanism that lies at the very center of human thought but at the furthest fringes of most attempts to realize artificial cognition. It is only thanks to this mental mechanism that human thoughts, despite their slowness and vagueness, are generally reliable, relevant, and insight-giving, whereas computer “thoughts” (if the word even applies at all) are extremely fragile, brittle, and limited, despite their enormous rapidity and precision.
As soon as categorization enters the scene, the competition with computers takes on a new kind of lopsidedness — but this time greatly in favor of humans. The primordial importance of categorization through analogy in helping living organisms survive becomes obvious if one tries to imagine what it would be like to “perceive” the world in a manner entirely devoid of categories — something like how the world must appear to a newborn, for whom each new concept has to be acquired from scratch and with great difficulty. By contrast, seeing the new in terms of the old and familiar allows one to benefit, and at only a slight cognitive cost, from knowledge previously acquired. Thus, if there were two creatures, one of which (an adult human being) perceived the world using categorization through analogy while the other (a computer) had no such mechanism to help it out, their competition in understanding the world around them would be comparable to a race between a person and a robot to climb up to a high roof, with the human allowed to use a preexistent staircase but with the robot required to construct its own staircase from scratch.
Categorization through analogy drives thinking at all levels, from the smallest to the largest. Consider a conversation in which several hierarchical linguistic levels are continually interacting. First of all, the choice of a specific word will of course determine the sounds that make it up; similarly, when one is typing at one’s keyboard, each word chosen determines the letters composing it, so that they come along automatically rather than being chosen one by one. Analogously, words are often determined by larger structures of which they are but pieces. This happens most clearly whenever one uses a stock phrase (such as “so to speak” or “cut to the chase” or “down to the wire” or “when push comes to shove” or “as easy as stealing candy from a baby”), but it also often happens when no such expression is involved, because one is always working under the constraints of the syntactic and semantic patterns of the language one is speaking, as well as those of one’s own habitual speech patterns.
And the same principle holds at more global levels of speech as well. Thus when one writes or utters a sentence, many of the words comprising it come along without being chosen one by one, since they are all serving a higher-level goal that has been pre-selected. Thus, much as with letters being constrained by a word, the words are in a sense constrained by higher-level thoughts. And then, moving yet further upwards, we can say that the same holds when one is developing an idea; that is, the sentences one produces to express this idea are once again constrained by a yet higher-level structure, even if there is more freedom at this level than at the letter-choice level. And the same holds at the level of the conversation itself, because its overall topic, its tone, the particular people involved in it, and so forth, all constrain the ideas that will be thought of. Of course at this level, there is much more flexibility than at the level of letters composing words. And so, in summary, a conversation constrains the ideas in it, the ideas constrain the sentences, the sentences constrain the phrases, the phrases constrain the words, and finally, the words constrain their letters.
Our claim that choices on each of these levels are carried out by categorization by analogy runs against the naïve image of categories as corresponding, more or less, to single words. To be sure, some categories are indeed named by words, but others are far larger, residing essentially at the level of an entire conversation.
For example, consider arguments about the size of the military budget. Those who advocate a large budget frequently trot out the same old arguments over and over again, based on the vital need to protect our nation against unnamed threats of all sorts, the intense pressure to develop ever newer technologies, the idea that advances in military technology help to drive the civilian marketplace, and so forth. Such a line of reasoning can be spun out over a long time, while always depending on a well-known, even hackneyed, conceptual skeleton that has been “seasoned to taste”, depending on the context, the occasion, and so forth. But whatever the variations on the theme are, it’s always the same conceptual skeleton centered on the need for national defense and for advances in technology. The high-level category determining the overall flow of one’s argument is defined by this conceptual skeleton.