Gödel, Escher, Bach: An Eternal Golden Braid

Author: Douglas R. Hofstadter


It is a common notion that randomness is an indispensable ingredient of creative acts. This may be true, but it does not have any bearing on the mechanizability-or rather, programmability!-of creativity. The world is a giant heap of randomness; when you mirror some of it inside your head, your head's interior absorbs a little of that randomness. The triggering patterns of symbols, therefore, can lead you down the most random-seeming paths, simply because they came from your interactions with a crazy, random world. So it can be with a computer program, too. Randomness is an intrinsic feature of thought, not something which has to be "artificially inseminated", whether through dice, decaying nuclei, random number tables, or what-have-you. It is an insult to human creativity to imply that it relies on such arbitrary sources.

What we see as randomness is often simply an effect of looking at something symmetric through a "skew" filter. An elegant example was provided by Salviati's two ways of looking at the number π/4. Although the decimal expansion of π/4 is not literally random, it is as random as one would need for most purposes: it is "pseudorandom".

Mathematics is full of pseudorandomness-plenty enough to supply all would-be creators for all time.
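As a concrete, purely illustrative sketch of this kind of pseudorandomness, the Python fragment below (which assumes the external mpmath arbitrary-precision library is available) tallies the first thousand decimal digits of π/4: the sequence is completely determined, yet each digit turns up roughly a hundred times, which is as much "randomness" as most would-be creators would ever need.

    # Illustrative sketch only: the decimal digits of pi/4 are fully
    # determined, yet their frequencies look statistically unremarkable.
    # Assumes the mpmath library (an external dependency) is installed.
    from collections import Counter
    from mpmath import mp, nstr

    mp.dps = 1010                        # carry ~1000 reliable decimal places
    text = nstr(mp.pi / 4, 1002)         # "0.78539816339..."
    digits = text.replace("0.", "", 1)[:1000]

    counts = Counter(digits)
    for digit in sorted(counts):
        print(digit, counts[digit])      # each digit appears roughly 100 times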

Just as science is permeated with "conceptual revolutions" on all levels at all times, so the thinking of individuals is shot through and through with creative acts. They are not just on the highest plane; they are everywhere. Most of them are small and have been made a million times before-but they are close cousins to the most highly creative and new acts. Computer programs today do not yet seem to produce many small creations. Most of what they do is quite "mechanical" still. That just testifies to the fact that they are not close to simulating the way we think-but they are getting closer.

Perhaps what differentiates highly creative ideas from ordinary ones is some combined sense of beauty, simplicity, and harmony. In fact, I have a favorite "meta-analogy", in which I liken analogies to chords. The idea is simple: superficially similar ideas are often not deeply related; and deeply related ideas are often superficially disparate. The analogy to chords is natural: physically close notes are harmonically distant (e.g., E-F-G); and harmonically close notes are physically distant (e.g., G-E-B).

Ideas that share a conceptual skeleton resonate in a sort of conceptual analogue to harmony; these harmonious "idea-chords" are often widely separated, as measured on an imaginary "keyboard of concepts". Of course, it doesn't suffice to reach wide and plunk down any old way-you may hit a seventh or a ninth! Perhaps the present analogy is like a ninth-chord-wide but dissonant.
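To make the chord analogy a little more tangible, here is a small Python sketch of my own, using a deliberately crude consonance table that is an assumption for illustration rather than anything from the text. It scores each pair of notes by keyboard distance in semitones and by a rough dissonance rating, and confirms that E-F-G is physically compact but dissonant, while G-E-B is physically spread out but harmonically snug.

    # Toy comparison of "physical" vs "harmonic" distance between notes.
    # The dissonance table is a conventional simplification assumed here
    # purely for illustration.
    SEMITONE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

    # Smaller score = more consonant interval (0 = unison, 6 = harshest).
    DISSONANCE = {0: 0, 7: 1, 5: 1, 4: 2, 3: 2, 8: 2, 9: 2,
                  2: 5, 10: 5, 1: 6, 11: 6, 6: 6}

    def physical(a, b):
        """Distance on the keyboard, in semitones."""
        return abs(SEMITONE[a] - SEMITONE[b])

    def harmonic(a, b):
        """Crude dissonance score for the interval between two notes."""
        return DISSONANCE[physical(a, b) % 12]

    for chord in (("E", "F", "G"), ("G", "E", "B")):
        pairs = [(x, y) for i, x in enumerate(chord) for y in chord[i + 1:]]
        print(chord,
              "keyboard span:", sum(physical(x, y) for x, y in pairs),
              "dissonance:", sum(harmonic(x, y) for x, y in pairs))

Run on the two example chords, the sketch reports E-F-G as spanning only 6 semitones in total yet scoring high in dissonance, and G-E-B as spanning more than twice as far while scoring low.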

Picking up Patterns on All Levels

Bongard problems were chosen as a focus in this Chapter because when you study them, you realize that the elusive sense for patterns which we humans inherit from our genes involves all the mechanisms of representation of knowledge, including nested contexts, conceptual skeletons and conceptual mapping, slippability, descriptions and metadescriptions and their interactions, fission and fusion of symbols, multiple representations (along different dimensions and different levels of abstraction), default expectations, and more.

These days, it is a safe bet that if some program can pick up patterns in one area, it will miss patterns in another area which, to us, are equally obvious. You may remember that I mentioned this back in Chapter 1, saying that machines can be oblivious to repetition, whereas people cannot. For instance, consider SHRDLU. If Eta Oin typed the sentence "Pick up a big red block and put it down" over and over again, SHRDLU would cheerfully react in the same way over and over again, exactly as an adding machine will print out "4" over and over again, if a human being has the patience to type "2+2" over and over again. Humans aren't like that; if some pattern occurs over and over again, they will pick it up. SHRDLU wasn't built with the potential for forming new concepts or recognizing patterns: it had no sense of over and overview.

The Flexibility of Language

SHRDLU's language-handling capability is immensely flexible-within limits. SHRDLU can figure out sentences of great syntactical complexity, or sentences with semantic ambiguities as long as they can be resolved by inspecting the data base-but it cannot handle "hazy" language. For instance, consider the sentence "How many blocks go on top of each other to make a steeple?" We understand it immediately, yet it does not make sense if interpreted literally. Nor is it that some idiomatic phrase has been used. "To go on top of each other" is an imprecise phrase which nonetheless gets the desired image across quite well to a human. Few people would be misled into visualizing a paradoxical setup with two blocks each of which is on top of the other-or blocks which are "going" somewhere or other.

The amazing thing about language is how imprecisely we use it and still manage to get away with it. SHRDLU uses words in a "metallic" way, while people use them in a "spongy" or "rubbery" or even "Nutty-Puttyish" way. If words were nuts and bolts, people could make any bolt fit into any nut: they'd just squish the one into the other, as in some surrealistic painting where everything goes soft. Language, in human hands, becomes almost like a fluid, despite the coarse grain of its components.

Recently, AI research in natural language understanding has turned away somewhat from the understanding of single sentences in isolation, and more towards areas such as understanding simple children's stories. Here is a well-known children's joke which illustrates the open-endedness of real-life situations:

A man took a ride in an airplane.

Unfortunately, he fell out.

Fortunately, he had a parachute on.

Unfortunately, it didn't work.

Fortunately, there was a haystack below him.

Unfortunately, there was a pitchfork sticking out of it.

Fortunately, he missed the pitchfork.

Unfortunately, he missed the haystack.

It can be extended indefinitely. To represent this silly story in a frame-based system would be extremely complex, involving jointly activating frames for the concepts of man, airplane, exit, parachute, falling, etc., etc.
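To give a rough idea of what "frame" means here, the following Python sketch (a bare-bones illustration in the spirit of Minsky-style frames, and not the machinery of any actual system) represents each concept as a bundle of slots with default expectations, which the successive lines of the joke keep activating and overriding:

    # Bare-bones, hypothetical frame structure: named slots with default
    # expectations that incoming story facts can override.
    class Frame:
        def __init__(self, name, **defaults):
            self.name = name
            self.slots = dict(defaults)       # default expectations

        def fill(self, **facts):
            self.slots.update(facts)          # specific facts override defaults
            return self

    # Each sentence of the joke activates frames and upsets their defaults.
    man = Frame("man", location="airplane", safe=True)
    parachute = Frame("parachute", works=True, wearer=None)
    haystack = Frame("haystack", soft=True, obstruction=None)

    man.fill(location="falling", safe=False)  # unfortunately, he fell out
    parachute.fill(wearer="man")              # fortunately, he had one on
    parachute.fill(works=False)               # unfortunately, it didn't work
    haystack.fill(obstruction="pitchfork")    # unfortunately, a pitchfork
    haystack.fill(obstruction=None)           # fortunately, he missed the pitchfork
    man.fill(location="ground")               # unfortunately, he missed the haystack too

    for frame in (man, parachute, haystack):
        print(frame.name, frame.slots)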

Intelligence and Emotions

Or consider this tiny yet poignant story:

Margie was holding tightly to the string of her beautiful new balloon. Suddenly, a gust of wind caught it. The wind carried it into a tree. The balloon hit a branch and burst. Margie cried and cried.

To understand this story, one needs to read many things between the lines. For instance: Margie is a little girl. This is a toy balloon with a string for a child to hold. It may not be beautiful to an adult, but in a child's eye, it is. She is outside. The "it" that the wind caught was the balloon. The wind did not pull Margie along with the balloon; Margie let go. Balloons can break on contact with any sharp point. Once they are broken, they are gone forever. Little children love balloons and can be bitterly disappointed when they break. Margie saw that her balloon was broken. Children cry when they are sad. "To cry and cry" is to cry very long and hard. Margie cried and cried because of her sadness at her balloon's breaking.
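A toy forward-chaining sketch in Python (my own illustration, not anything from the text) hints at how a program might mechanize a few of these between-the-lines inferences, by applying hand-written commonsense rules to the story's bare assertions until nothing new follows:

    # Toy forward-chaining over hand-written commonsense rules; purely
    # illustrative of the kind of knowledge the story demands.
    RULES = [
        (("balloon hit a branch",), "balloon burst"),
        (("balloon burst",), "balloon is gone forever"),
        (("balloon is gone forever", "children love balloons"), "Margie is sad"),
        (("Margie is sad",), "Margie cries"),
    ]

    def infer(facts):
        """Apply every rule repeatedly until no new conclusions appear."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if conclusion not in facts and all(p in facts for p in premises):
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"balloon hit a branch", "children love balloons"}))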

Such a list is probably only a small fraction of what is lacking at the surface level. A program must have all this knowledge in order to get at what is going on. And you might object that, even if it "understands" in some intellectual sense what has been said, it will never really understand, until it, too, has cried and cried. And when will a computer do that? This is the kind of humanistic point which Joseph Weizenbaum is concerned with making in his book Computer Power and Human Reason, and I think it is an important issue; in fact, a very, very deep issue. Unfortunately, many AI workers at this time are unwilling, for various reasons, to take this sort of point seriously. But in some ways, those AI workers are right: it is a little premature to think about computers crying; we must first think about rules for computers to deal with language and other things; in time, we'll find ourselves face to face with the deeper issues.

AI Has Far to Go

Sometimes it seems that there is such a complete absence of rule-governed behavior that human beings just aren't rule-governed. But this is an illusion-a little like thinking that crystals and metals emerge from rigid underlying laws, but that fluids or flowers don't.

We'll come back to this question in the next Chapter.

The process of logic itself working internally in the brain may be more analogous to a succession of operations with symbolic pictures, a sort of abstract analogue of the Chinese alphabet or some Mayan description of events-except that the elements are not merely words but more like sentences or whole stories with linkages between them forming a sort of meta- or super-logic with its own rules.

It is hard for most specialists to express vividly-perhaps even to remember-what originally sparked them to enter their field. Conversely, someone on the outside may understand a field's special romance and may be able to articulate it precisely. I think that is why this quote from Ulam has appeal for me, because it poetically conveys the strangeness of the enterprise of AI, and yet shows faith in it. And one must run on faith at this point, for there is so far to go!

Ten Questions and Speculations

To conclude this Chapter, I would like to present ten "Questions and Speculations" about AI. I would not make so bold as to call them "Answers"-these are my personal opinions.

They may well change in some ways, as I learn more and as AI develops more. (In what follows, the term "AI program" means a program which is far ahead of today's programs; it means an "Actually Intelligent" program. Also, the words "program" and "computer" probably carry overly mechanistic connotations, but let us stick with them anyway.)

Question: Will a computer program ever write beautiful music?

Speculation: Yes, but not soon. Music is a language of emotions, and until programs have emotions as complex as ours, there is no way a program will write anything beautiful. There can be "forgeries"-shallow imitations of the syntax of earlier music-but despite what one might think at first, there is much more to musical expression than can be captured in syntactical rules. There will be no new kinds of beauty turned up for a long time by computer music-composing programs. Let me carry this thought a little further. To think-and I have heard this suggested-that we might soon be able to command a preprogrammed mass-produced mail-order twenty-dollar desk-model "music box" to bring forth from its sterile circuitry pieces which Chopin or Bach might have written had they lived longer is a grotesque and shameful misestimation of the depth of the human spirit. A "program" which could produce music as they did would have to wander around the world on its own, fighting its way through the maze of life and feeling every moment of it. It would have to understand the joy and loneliness of a chilly night wind, the longing for a cherished hand, the inaccessibility of a distant town, the heartbreak and regeneration after a human death. It would have to have known resignation and world-weariness, grief and despair, determination and victory, piety and awe. In it would have had to commingle such opposites as hope and fear, anguish and jubilation, serenity and suspense. Part and parcel of it would have to be a sense of grace, humor, rhythm, a sense of the unexpected-and of course an exquisite awareness of the magic of fresh creation. Therein, and therein only, lie the sources of meaning in music.

Question: Will emotions be explicitly programmed into a machine?

Speculation: No. That is ridiculous. Any direct simulation of emotions-PARRY, for example-cannot approach the complexity of human emotions, which arise indirectly from the organization of our minds. Programs or machines will acquire emotions in the same way: as by-products of their structure, of the way in which they are organized-not by direct programming. Thus, for example, nobody will write a "falling-in-love" subroutine, any more than they would write a "mistake-making" subroutine. "Falling in love" is a description which we attach to a complex process of a complex system; there need be no single module inside the system which is solely responsible for it, however!

Question: Will a thinking computer be able to add fast?

Speculation: Perhaps not. We ourselves are composed of hardware which does fancy calculations, but that doesn't mean that our symbol level, where "we" are, knows how to carry out the same fancy calculations. Let me put it this way: there's no way that you can load numbers into your own neurons to add up your grocery bill.

Luckily for you, your symbol level (i.e., you) can't gain access to the neurons which are doing your thinking-otherwise you'd get addle-brained. To paraphrase Descartes again:

"I think; therefore I have no access
to the level where I sum."

Why should it not be the same for an intelligent program? It mustn't be allowed to gain access to the circuits which are doing its thinking-otherwise it'll get addle-CPU'd.
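As a loose illustration of that separation of levels (my own sketch, not a claim about brains or about any real architecture), the Python fragment below gives the "symbol level" an addition method that only knows how to count, while the fast arithmetic sits in a substrate class to which the symbol level holds no reference at all:

    # Loose illustration of level separation: the substrate adds quickly,
    # but the "symbol level" interface never gets a handle to it.
    class Hardware:
        """Fast numeric substrate; nothing above is handed a reference to it."""
        @staticmethod
        def add(a, b):
            return a + b              # effectively one machine instruction

    class SymbolLevel:
        def add(self, a, b):
            """The symbol level adds the way a person does: by counting on."""
            total = a
            for _ in range(b):
                total += 1            # slow, step-by-step counting
            return total
            # Note: no reference to Hardware here; this level has no access
            # to the circuits doing the fast arithmetic beneath it.

    you = SymbolLevel()
    print(you.add(2, 2))              # prints 4, the slow way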
