Gödel, Escher, Bach: An Eternal Golden Braid
Author: Douglas R. Hofstadter
Two other questions come to my mind. The first is: "Could it be that the weather phenomena which we perceive on our scale - a tornado, a drought - are just intermediate-level phenomena: parts of vaster, slower phenomena?" If so, then true high-level weather phenomena would be global, and their time scale would be geological. The Ice Age would be a high-level weather event. The second question is: "Are there intermediate-level weather phenomena which have so far escaped human perception, but which, if perceived, could give greater insight into why the weather is as it is?"
From Tornados to Quarks
This last suggestion may sound fanciful, but it is not all that far-fetched.
We need only look to the hardest of the hard sciences - physics - to find peculiar examples of systems which are explained in terms of interacting "parts" which are themselves invisible. In physics, as in any other discipline, a system is a group of interacting parts. In most systems that we know, the parts retain their identities during the interaction, so that we still see the parts inside the system. For example, when a team of football players assembles, the individual players retain their separateness - they do not melt into some composite entity, in which their individuality is lost. Still - and this is important - some processes are going on in their brains which are evoked by the team context, and which would not go on otherwise, so that in a minor way, the players change identity when they become part of the larger system, the team. This kind of system is called a nearly decomposable system (the term comes from H. A. Simon's article "The Architecture of Complexity"; see the Bibliography). Such a system consists of weakly interacting modules, each of which maintains its own private identity throughout the interaction but, by becoming slightly different from how it is when outside of the system, contributes to the cohesive behavior of the whole system.
The systems studied in physics are usually of this type. For instance, an atom is seen as made of a nucleus whose positive charge captures a number of electrons in "orbits", or bound states. The bound electrons are very much like free electrons, despite their being internal to a composite object.
Some systems studied in physics offer a contrast to the relatively straightforward atom. Such systems involve extremely strong interactions, as a result of which the parts are swallowed up into the larger system, and lose some or all of their individuality. An example of this is the nucleus of an atom, which is usually described as being "a collection of protons and neutrons". But the forces which pull the component particles together are so strong that the component particles do not survive in anything like their "free" form (the form they have when outside a nucleus). And in fact a nucleus acts in many ways as a single particle, rather than as a collection of interacting particles. When a nucleus is split, protons and neutrons are often released, but other particles, such as pi-mesons and gamma rays, are also commonly produced. Are all those different particles physically present inside a nucleus before it is split, or are they just "sparks" which fly off when the nucleus is split? It is perhaps not meaningful to try to give an answer to such a question. On the level of particle physics, the difference between storing the potential to make "sparks" and storing actual subparticles is not so clear.
A nucleus is thus one system whose "parts", even though they are not visible while on the inside, can be pulled out and made visible. However, there are more pathological cases, such as the proton and neutron seen as systems themselves. Each of them has been hypothesized to be constituted from a trio of "quarks" - hypothetical particles which can be combined in twos or threes to make many known fundamental particles. However, the interaction between quarks is so strong that not only can they not be seen inside the proton and neutron, but they cannot even be pulled out at all! Thus, although quarks help to give a theoretical understanding of certain properties of protons and neutrons, their own existence may perhaps never be independently established. Here we have the antithesis of a "nearly decomposable system" - it is a system which, if anything, is "nearly indecomposable". Yet what is curious is that a quark-based theory of protons and neutrons (and other particles) has considerable explanatory power, in that many experimental results concerning the particles which quarks supposedly compose can be accounted for quite well, quantitatively, by using the "quark model".
Superconductivity: A "Paradox" of Renormalization
In Chapter V we discussed how renormalized particles emerge from their bare cores, by recursively compounded interactions with virtual particles. A renormalized particle can be seen either as this complex mathematical construct, or as the single lump which it is, physically. One of the strangest and most dramatic consequences of this way of describing particles is the explanation it provides for the famous phenomenon of superconductivity: resistance-free flow of electrons in certain solids, at extremely low temperatures.
It turns out that electrons in solids are renormalized by their interactions with strange quanta of vibration called phonons (themselves renormalized as well!). These renormalized electrons are called polarons. Calculation shows that at very low temperatures, two oppositely spinning polarons will begin to attract each other, and can actually become bound together in a certain way. Under the proper conditions, all the current-carrying polarons will get paired up, forming Cooper pairs. Ironically, this pairing comes about precisely because electrons - the bare cores of the paired polarons - repel each other electrically. In contrast to the electrons, each Cooper pair feels neither attracted to nor repelled by any other Cooper pair, and consequently it can slip freely through a metal as if the metal were a vacuum. If you convert the mathematical description of such a metal from one whose primitive units are polarons into one whose primitive units are Cooper pairs, you get a considerably simplified set of equations. This mathematical simplicity is the physicist's way of knowing that "chunking" into Cooper pairs is the natural way to look at superconductivity.
Here we have several levels of particle: the Cooper pair itself; the two oppositely spinning polarons which compose it; the electrons and phonons which make up the polarons; and then, within the electrons, the virtual photons and positrons, etc., etc. We can look at each level and perceive phenomena there which are explained by an understanding of the levels below.
"Sealing-off"
Similarly, and fortunately, one does not have to know all about quarks to understand many things about the particles which they may compose. Thus, a nuclear physicist can proceed with theories of nuclei that are based on protons and neutrons, and ignore quark theories and their rivals. The nuclear physicist has a chunked picture of protons and neutrons - a description derived from lower-level theories but which does not require understanding the lower-level theories.
Likewise, an atomic physicist has a chunked picture of an atomic nucleus derived from nuclear theory. Then a chemist has a chunked picture of the electrons and their orbits, and builds theories of small molecules, theories which can be taken over in a chunked way by the molecular biologist, who has an intuition for how small molecules hang together, but whose technical expertise is in the field of extremely large molecules and how they interact. Then the cell biologist has a chunked picture of the units which the molecular biologist pores over, and tries to use them to account for the ways that cells interact. The point is clear. Each level is, in some sense, "sealed off" from the levels below it. This is another of Simon's vivid terms, recalling the way in which a submarine is built in compartments, so that if one part is damaged, and water begins pouring in, the trouble can be prevented from spreading, by closing the doors, thereby sealing off the damaged compartment from neighboring compartments.
Although there is always some "leakage" between the hierarchical levels of science, so that a chemist cannot afford to ignore lower-level physics totally, or a biologist to ignore chemistry totally, there is almost no leakage from one level to a distant level. That is why people can have intuitive understandings of other people without necessarily understanding the quark model, the structure of nuclei, the nature of electron orbits, the chemical bond, the structure of proteins, the organelles in a cell, the methods of intercellular communication, the physiology of the various organs of the human body, or the complex interactions among organs. All that a person needs is a chunked model of how the highest level acts; and as we all know, such models are very realistic and successful.
The Trade-off between Chunking and Determinism
There is, however, perhaps one significant negative feature of a chunked model: it usually does not have exact predictive power. That is, we save ourselves from the impossible task of seeing people as collections of quarks (or whatever is at the lowest level) by using chunked models; but of course such models only give us probabilistic estimates of how other people feel, will react to what we say or do, and so on. In short, in using chunked high-level models, we sacrifice determinism for simplicity. Despite not being sure how people will react to a joke, we tell it with the expectation that they will do something such as laugh, or not laugh - rather than, say, climb the nearest flagpole. (Zen masters might well do the latter!) A chunked model defines a "space" within which behavior is expected to fall, and specifies probabilities of its falling in different parts of that space.
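The idea of a chunked model as a probability space can be given a toy illustration. In the Python sketch below, the outcomes and their probabilities are invented purely for the example; the point is only that such a model predicts a space of reactions with probabilities, never the exact reaction.

```python
import random

# A toy "chunked model" of a listener's reaction to a joke. The outcomes
# and the numbers are invented for illustration, not measured.
joke_model = {"laugh": 0.6, "polite smile": 0.3, "groan": 0.1}

def predict(model):
    """Sample one outcome from the chunked model's probability space."""
    outcomes, weights = zip(*model.items())
    return random.choices(outcomes, weights=weights)[0]

# Every prediction falls inside the model's space; "climb the nearest
# flagpole" is simply not among the behaviors the model allows for.
reactions = [predict(joke_model) for _ in range(1000)]
assert set(reactions) <= set(joke_model)
```

The model says nothing about any single trial; it only bounds the space and weights its regions, which is exactly the trade of determinism for simplicity described above.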
"Computers Can Only Do What You Tell Them to Do"
Now these ideas can be applied as well to computer programs as to composite physical systems. There is an old saw which says, "Computers can only do what you tell them to do." This is right in one sense, but it misses the point: you don't know in advance the consequences of what you tell a computer to do; therefore its behavior can be as baffling and surprising and unpredictable to you as that of a person. You generally know in advance the space in which the output will fall, but you don't know the details of where it will fall. For instance, you might write a program to calculate the first million digits of π. Your program will spew forth digits of π much faster than you can - but there is no paradox in the fact that the computer is outracing its programmer. You know in advance the space in which the output will lie - namely the space of digits between 0 and 9 - which is to say, you have a chunked model of the program's behavior; but if you'd known the rest, you wouldn't have written the program.
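Such a program need not be exotic. The Python sketch below uses Gibbons' "unbounded spigot" algorithm, one standard way to stream the digits of π with exact integer arithmetic. The chunked model is easy to verify - every output is a digit from 0 to 9 - but which digit comes next is something you learn only by running the program.

```python
from itertools import islice

def pi_digits():
    """Stream the decimal digits of pi (Gibbons' unbounded spigot).

    Uses only exact integer arithmetic, so it can run indefinitely."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            yield n                      # the next digit is pinned down
            q, r, n = 10*q, 10*(r - n*t), 10*(3*q + r)//t - 10*n
        else:                            # consume another term of the series
            q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                (q*(7*k + 2) + r*l)//(t*l), l + 2)

digits = list(islice(pi_digits(), 20))
print(digits)   # begins 3, 1, 4, 1, 5, 9, ...
```

The programmer's chunked model ("digits between 0 and 9, forever") is correct and complete as a description of the output space, yet tells you nothing about the millionth digit.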
There is another sense in which this old saw is rusty. This involves the fact that as you program in ever higher-level languages, you know less and less precisely what you've told the computer to do! Layers and layers of translation may separate the "front end" of a complex program from the actual machine language instructions. At the level you think and program, your statements may resemble declaratives and suggestions more than they resemble imperatives or commands. And all the internal rumbling provoked by the input of a high-level statement is invisible to you, generally, just as when you eat a sandwich, you are spared conscious awareness of the digestive processes it triggers. In any case, this notion that "computers can only do what they are told to do," first propounded by Lady Lovelace in her famous memoir, is so prevalent and so connected with the notion that "computers cannot think" that we shall return to it in later Chapters when our level of sophistication is greater.
Two Types of System
There is an important division between two types of system built up from many parts. There are those systems in which the behavior of some parts tends to cancel out the behavior of other parts, with the result that it does not matter too much what happens on the low level, because most anything will yield similar high-level behavior. An example of this kind of system is a container of gas, where all the molecules bump and bang against each other in very complex microscopic ways; but the total outcome, from a macroscopic point of view, is a very calm, stable system with a certain temperature, pressure, and volume. Then there are systems where the effect of a single low-level event may get magnified into an enormous high-level consequence. Such a system is a pinball machine, where the exact angle with which a ball strikes each post is crucial in determining the rest of its descending pathway.
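Both kinds of system can be caricatured in a few lines of code. In the Python sketch below, 10^5 random samples stand in for the ~10^23 molecules of a real gas, and the chaotic logistic map stands in for the pinball machine; both stand-ins are chosen for the example, not drawn from the text.

```python
import random

random.seed(42)

# Type 1: canceling system. Myriad microscopic bumps average out,
# so the macroscopic reading is stable regardless of the details.
def mean_speed(n_molecules):
    return sum(random.random() for _ in range(n_molecules)) / n_molecules

print(mean_speed(100_000))   # always lands very close to 0.5

# Type 2: amplifying system. The chaotic logistic map magnifies a
# microscopic difference in the starting condition (the "angle at the
# post") into a macroscopic divergence between two trajectories.
def max_divergence(x0, delta, steps, r=4.0):
    a, b = x0, x0 + delta
    widest = 0.0
    for _ in range(steps):
        a, b = r*a*(1 - a), r*b*(1 - b)
        widest = max(widest, abs(a - b))
    return widest

print(max_divergence(0.4, 1e-9, 60))   # of order 1, from a 1e-9 nudge
```

In the first system the low level is irrelevant to the high level; in the second, a one-part-in-a-billion change at the low level eventually dominates the high-level outcome.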
A computer is an elaborate combination of these two types of system. It contains subunits such as wires, which behave in a highly predictable fashion: they conduct electricity according to Ohm's law, a very precise, chunked law which resembles the laws governing gases in containers, since it depends on statistical effects in which billions of random effects cancel each other out, yielding a predictable overall behavior. A computer also contains macroscopic subunits, such as a printer, whose behavior is completely determined by delicate patterns of currents. What the printer prints is not by any means created by a myriad canceling microscopic effects. In fact, in the case of most computer programs, the value of every single bit in the program plays a critical role in the output that gets printed. If any bit were changed, the output would also change drastically.
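A minimal illustration of that last point, using a three-character Python expression as a toy stand-in for a whole program: flipping a single bit turns the multiplication operator '*' (0x2A) into the addition operator '+' (0x2B), and the output changes completely.

```python
# A three-byte "program" and a copy with exactly one bit flipped.
expr = b"6*7"
flipped = bytes([expr[0], expr[1] ^ 0b1, expr[2]])   # '*' (0x2A) -> '+' (0x2B)

print(expr.decode(), "=", eval(expr.decode()))        # 6*7 = 42
print(flipped.decode(), "=", eval(flipped.decode()))  # 6+7 = 13
```

No canceling of microscopic effects here: one bit out of twenty-four determines whether the machine multiplies or adds.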