I Am a Strange Loop
by Douglas R. Hofstadter

I begin with the simple fact that living beings, having been shaped by evolution, have survival as their most fundamental, automatic, and built-in goal. To enhance the chances of its survival, any living being must be able to react flexibly to events that take place in its environment. This means it must develop the ability to sense and to categorize, however rudimentarily, the goings-on in its immediate environment (most earthbound beings can pretty safely ignore comets crashing on Jupiter). Once the ability to sense external goings-on has developed, however, there ensues a curious side effect that will have vital and radical consequences. This is the fact that the living being’s ability to sense certain aspects of its environment flips around and endows the being with the ability to sense certain aspects of itself.

That this flipping-around takes place is not in the least amazing or miraculous; rather, it is a quite unremarkable, indeed trivial, consequence of the being’s ability to perceive. It is no more surprising than the fact that audio feedback can take place or that a TV camera can be pointed at a screen to which its image is being sent. Some people may find the notion of such self-perception peculiar, pointless, or even perverse, but such a prejudice does not make self-perception a complex or subtle idea, let alone paradoxical. After all, in the case of a being struggling to survive, the one thing that is always in its environment is… itself. So why, of all things, should the being be perceptually immune to the most salient item in its world? Now that would seem perverse!

Such a lacuna would be reminiscent of a language whose vocabulary kept growing and growing yet without ever developing words for such common concepts as are named by the English words “say”, “speak”, “word”, “language”, “understand”, “ask”, “question”, “answer”, “talk”, “converse”, “claim”, “deny”, “argue”, “tell”, “sentence”, “story”, “book”, “read”, “insist”, “describe”, “translate”, “paraphrase”, “repeat”, “lie”, “hedge”, “noun”, “verb”, “tense”, “letter”, “syllable”, “plural”, “meaning”, “grammar”, “emphasize”, “refer”, “pronounce”, “exaggerate”, “bluster”, and so forth. If such a peculiarly self-ignorant language existed, then as it grew in flexibility and sophistication, its speakers would engage ever more in talking, arguing, blustering, and so forth, but without ever referring to these activities, and such entities as questions, answers, and lies would become (even while remaining unnamed) ever more salient and numerous. Like the hobbled formalisms that came out of Bertrand Russell’s timid theory of types, this language would have a gaping hole at its core — the lack of any mechanism for a word or utterance or book (etc.) to refer to itself. Analogously, for a living creature to have evolved rich capabilities of perception and categorization but to be constitutionally incapable of focusing any of that apparatus onto itself would be highly anomalous. Its selective neglect would be pathological, and would threaten its survival.

Varieties of Looping

To be sure, the most primitive living creatures have little or no self-perception. By analogy, we can think of a TV camera rigidly bolted on top of a TV set and facing away from the screen, like a flashlight tightly attached to a miner’s helmet, always pointing away from the miner’s eyes, never into them. In such a TV setup, obviously, a self-turned loop is out of the question. No matter how you turn it, the camera and the TV set turn in synchrony, preventing the closing of a loop.

We next imagine a more “evolved”, hence more flexible, setup; this time the camera, rather than being bolted onto its TV set, is attached to it by a “short leash”. Here, depending on the length and flexibility of the cord, it may be possible for the camera to twist around sufficiently to capture at least part of the TV screen in its viewfinder, giving rise to a truncated corridor. The biological counterpart to feedback of this level of sophistication may be the way our pet animals or even young children are slightly self-aware.

The next stage, obviously, is where the “leash” is sufficiently long and flexible that the video camera can point straight at the center of the screen. This will allow an endless corridor, which is far richer than a truncated one. Even so, the possibility of closing the self-watching loop does not pin down the system’s richness, because there still are many options open. Can the camera tilt or not, and if so, by how much? Can it zoom in or out? Is its image in color, or just in black and white? Can brightness and contrast be tweaked? What degree of resolution does the image have? What percentage of time is spent in self-observation as opposed to observation of the environment? Is there some way for the video camera itself to appear on the screen? And on and on. There are still many parameters to play with, so the potential loop has many open dimensions of sophistication.

Reception versus Perception

Despite the richness afforded by all these options, a self-watching television system will always lack one crucial aspect: the capacity of perception, as opposed to mere reception, or image-receiving. Perception takes as its starting point some kind of input (possibly but not necessarily a two-dimensional image) composed of a vast number of tiny signals, but then it goes much further, eventually winding up in the selective triggering of a small subset of a large repertoire of dormant symbols — discrete structures that have representational quality. That is to say, a symbol inside a cranium, just like a simmball in the hypothetical careenium, should be thought of as a triggerable physical structure that constitutes the brain’s way of implementing a particular category or concept.

I should offer a quick caveat concerning the word “symbol” in this new sense, since the word comes laden with many prior associations, some of which I definitely want to avoid. We often refer to written tokens (letters of the alphabet, numerals, musical notes on paper, Chinese characters, and so forth) as “symbols”. That’s not the meaning I have in mind here. We also sometimes talk of objects in a myth, dream, or allegory (for example, a key, a flame, a ring, a sword, an eagle, a cigar, a tunnel) as being “symbols” standing for something else. This is not the meaning I have in mind, either. The idea I want to convey by the phrase “a symbol in the brain” is that some specific structure inside your cranium (or your careenium, depending on what species you belong to) gets activated whenever you think of, say, the Eiffel Tower. That brain structure, whatever it might be, is what I would call your “Eiffel Tower symbol”.

You also have an “Albert Einstein” symbol, an “Antarctica” symbol, and a “penguin” symbol, the latter being some kind of structure inside your brain that gets triggered when you perceive one or more penguins, or even when you are just thinking about penguins without perceiving any. There are also, in your brain, symbols for action concepts like “kick”, “kiss”, and “kill”, for relational concepts like “before”, “behind”, and “between”, and so on. In this book, then, symbols in a brain are the neurological entities that correspond to concepts, just as genes are the chemical entities that correspond to hereditary traits. Each symbol is dormant most of the time (after all, most of us seldom think about cotton candy, egg-drop soup, St. Thomas Aquinas, Fermat’s last theorem, Jupiter’s Great Red Spot, or dental-floss dispensers), but on the other hand, every symbol in our brain’s repertoire is potentially triggerable at any time.

The passage leading from vast numbers of received signals to a handful of triggered symbols is a kind of funneling process in which initial input signals are manipulated or “massaged”, the results of which selectively trigger further (i.e., more “internal”) signals, and so forth. This baton-passing by squads of signals traces out an ever-narrowing pathway in the brain, which winds up triggering a small set of symbols whose identities are of course a subtle function of the original input signals.

Thus, to give a hopefully amusing example, myriads of microscopic olfactory twitchings in the nostrils of a voyager walking down an airport concourse can lead, depending on the voyager’s state of hunger and past experiences, to a joint triggering of the two symbols “sweet” and “smell”, or a triggering of the symbols “gooey” and “fattening”, or of the symbols “Cinnabon” and “nearby”, or of the symbols “wafting”, “advertising”, “subliminal”, “sly”, and “gimmick” — or perhaps a triggering of all eleven of these symbols in the brain, in some sequence or other. Each of these examples of symbol-triggering constitutes an act of perception, as opposed to the mere reception of a gigantic number of microscopic signals arriving from some source, like a million raindrops landing on a roof.

In the interests of clarity, I have painted too simple a picture of the process of perception, for in reality, there is a great deal of two-way flow. Signals don’t propagate solely from the outside inwards, towards symbols; expectations from past experiences simultaneously give rise to signals propagating outwards from certain symbols. There takes place a kind of negotiation between inward-bound and outward-bound signals, and the result is the locking-in of a pathway connecting raw input to symbolic interpretation. This mixture of directions of flow in the brain makes perception a truly complex process. For the present purposes, though, it suffices to say that perception means that, thanks to a rapid two-way flurry of signal-passing, impinging torrents of input signals wind up triggering a small set of symbols, or in less biological words, activating a few concepts.
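
To make the funneling image a bit more concrete for readers who think in code, here is a minimal sketch in Python. It is purely illustrative and not meant as a model of any real brain: the repertoire, the cue sets, and the firing threshold are all invented for the example, and the sketch captures only the inward, one-way flow, leaving out the two-way negotiation just described.

# A deliberately tiny, invented illustration of reception versus perception:
# a repertoire of dormant symbols, plus a funneling step that lets a torrent
# of raw micro-signals trigger only a small handful of them.

from collections import Counter

# The dormant repertoire: each symbol lists a few low-level cues that tend
# to activate it. (All names and cue sets here are made up for the example.)
REPERTOIRE = {
    "sweet":    {"sugar", "vanilla"},
    "smell":    {"sugar", "vanilla", "cinnamon"},
    "Cinnabon": {"cinnamon", "sugar", "dough"},
    "nearby":   {"strong_signal"},
}

def receive(raw_signals):
    """Mere reception: tally the incoming micro-signals, nothing more."""
    return Counter(raw_signals)

def perceive(raw_signals, threshold=2):
    """Funnel a torrent of signals down to a small set of triggered symbols."""
    features = receive(raw_signals)
    triggered = []
    for symbol, cues in REPERTOIRE.items():
        # A symbol fires only if enough of its cues are present in the input.
        if sum(features[c] for c in cues) >= threshold:
            triggered.append(symbol)
    return triggered

# A million raindrops on a roof are merely received; a waft in the concourse
# ends up as a handful of activated concepts.
signals = ["cinnamon", "sugar", "dough", "sugar", "strong_signal", "strong_signal"]
print(perceive(signals))   # ['sweet', 'smell', 'Cinnabon', 'nearby']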

In summary, the missing ingredient in a video system, no matter how high its visual fidelity, is a repertoire of symbols that can be selectively triggered. Only if such a repertoire existed and were accessed could we say that the system was actually perceiving anything. Still, nothing prevents us from imagining augmenting a vanilla video system with additional circuitry of great sophistication that supports a cascade of signal-massaging processes that lead toward a repertoire of potentially triggerable symbols. Indeed, thinking about how one might tackle such an engineering challenge is a helpful way of simultaneously envisioning the process of perception in the brain of a living creature and its counterpart in the cognitive system of an artificial mind (or an alien creature, for that matter). However, quite obviously, not all realizations of such an architecture, whether earthbound, alien, or artificial, will possess equally rich repertoires of symbols to be potentially triggered by incoming stimuli. As I have done earlier in this book, I wish once again to consider sliding up the scale of sophistication.

Mosquito Symbols

Suppose we begin with a humble mosquito (not that I know any arrogant ones). What kind of representation of the outside world does such a primitive creature have? In other words, what kind of symbol repertoire is housed inside its brain, available for tapping into by perceptual processes? Does a mosquito even know or believe that there are objects “out there”? Suppose the answer is yes, though I am skeptical about that. Does it assign the objects it registers as such to any kind of categories? Do words like “know” or “believe” apply in any sense to a mosquito?

Let’s be a little more concrete. Does a mosquito (of course without using words) divide the external world up into mental categories like “chair”, “curtain”, “wall”, “ceiling”, “person”, “dog”, “fur”, “leg”, “head”, or “tail”? In other words, does a mosquito’s brain incorporate symbols — discrete triggerable structures — for such relatively high abstractions? This seems pretty unlikely; after all, to do its mosquito thing, a mosquito could do perfectly well without such “intellectual” luxuries. Who cares if I’m biting a dog, a cat, a mouse, or a human — and who cares if it’s an arm, an ear, a tail, or a leg — as long as I’m drawing blood?

What kinds of categories, then, does a mosquito need to have? Something like “potential source of food” (a “goodie”, for short) and “potential place to land” (a “port”, for short) seem about as rich as I expect its category system to be. It may also be dimly aware of something that we humans would call a “potential threat” — a certain kind of rapidly moving shadow or visual contrast (a “baddie”, for short). But then again, “aware”, even with the modifier “dimly”, may be too strong a word. The key issue here is whether a mosquito has symbols for such categories, or could instead get away with a simpler type of machinery not involving any kind of perceptual cascade of signals that culminates in the triggering of symbols.

If this talk of bypassing symbols and managing with a very austere substitute for perception strikes you as a bit blurry, then consider the following questions. Is a toilet aware, no matter how slightly, of its water level? Is a thermostat aware, albeit extremely feebly, of the temperature it is controlling? Is a heat-seeking missile aware, be it ever so minimally, of the heat emanating from the airplane that it is pursuing? Is the Exploratorium’s jovially jumping red spot aware, though only terribly rudimentarily, of the people from whom it is forever so gaily darting away? If you answered “no” to these questions, then imagine similarly unaware mechanisms inside a mosquito’s head, enabling it to find blood and to avoid getting bashed, yet to accomplish these feats without using any ideas.
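
For contrast with the earlier perception sketch, here is an equally minimal Python sketch of the kind of symbol-free machinery these questions gesture at: a thermostat-style control loop that regulates a quantity without ever consulting a repertoire of symbols or forming a category. The setpoint, band, and readings are invented for the example.

# A purely illustrative, invented sketch of "idea-free" machinery: a control
# loop that responds to its input by simple threshold comparison, with no
# symbols, no categories, and no perceptual cascade anywhere in sight.

def thermostat_step(current_temp, setpoint=20.0, band=0.5):
    """Return 'heat_on', 'heat_off', or 'idle' from a single reading.
    The reading is never categorized or interpreted; it is only compared
    against fixed thresholds."""
    if current_temp < setpoint - band:
        return "heat_on"
    if current_temp > setpoint + band:
        return "heat_off"
    return "idle"

# The loop "responds" to temperature the way a mosquito might respond to a
# looming shadow: reliably and usefully, yet without using any ideas.
for reading in [18.2, 19.7, 20.6, 21.1]:
    print(reading, "->", thermostat_step(reading))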

Mosquito Selves

Having considered mosquito symbols, we now inch closer to the core of our quest. What is the nature of a mosquito’s interiority? That is, what is a mosquito’s experience of “I”-ness? How rich a sense of self is a mosquito endowed with? These questions are very ambitious, so let’s try something a little simpler. Does a mosquito have a visual image of how it looks? I hope you share my skepticism on this score. Does a mosquito know that it has wings or legs or a head? Where on earth would it get ideas like “wings” or “head”? Does it know that it has eyes or a proboscis? The mere suggestion seems ludicrous. How would it ever find such things out? Let’s instead speculate a bit about our mosquito’s knowledge of its own internal state. Does it have a sense of being hot or cold? Of being tuckered out or full of pep? Hungry or starved? Happy or sad? Hopeful or frightened? I’m sorry, but even these strike me as lying well beyond the pale, for an entity as humble as a mosquito.
