Hamming’s second potential solution relies on the fact that humans select, and continuously improve, the mathematics to fit a given situation. In other words, Hamming proposes that we are witnessing what we might call an “evolution and natural selection” of mathematical ideas—humans invent a large number of mathematical concepts, and only those that fit are chosen. For years I also used to believe that this was the complete explanation. A similar interpretation was proposed by physics Nobel laureate Steven Weinberg in his book Dreams of a Final Theory. Can this be the explanation to Wigner’s enigma? There is no doubt that such selection and evolution indeed occur. After sifting through a variety of mathematical formalisms and tools, scientists retain those that work, and they do not hesitate to upgrade them or change them as better ones become available. But even if we accept this idea, why are there mathematical theories that can explain the universe at all?
Hamming’s third point is that our impression of the effectiveness of mathematics may, in fact, be an illusion, since there is much in the world around us that mathematics does not really explain. In support of this perspective I could note, for instance, that the mathematician Israïl Moseevich Gelfand was once quoted as having said: “There is only one thing which is more unreasonable than the unreasonable effectiveness of mathematics in physics, and this is the unreasonable ineffectiveness [emphasis added] of mathematics in biology.” I don’t think that this in itself can explain away Wigner’s problem. It is true that, unlike in The Hitchhiker’s Guide to the Galaxy, we cannot say that the answer to life, the universe, and everything is forty-two. Nevertheless, there is a sufficiently large number of phenomena that mathematics does elucidate to warrant an explanation. Moreover, the range of facts and processes that can be interpreted by mathematics continually widens.
Hamming’s fourth explanation is very similar to the one suggested by Atiyah—that “Darwinian evolution would naturally select for survival those competing forms of life which had the best models of reality in their minds—‘best’ meaning best for surviving and propagating.”
Computer scientist Jef Raskin (1943–2005), who started the Macintosh project for Apple Computer, also held related views, with a particular emphasis on the role of logic. Raskin concluded that
human logic was forced on us by the physical world and is therefore consistent with it. Mathematics derives from logic. This is why mathematics is consistent with the physical world. There is no mystery here—though we should not lose our sense of wonder and amazement at the nature of things even as we come to understand them better.
Hamming was less convinced, even by the strength of his own argument. He pointed out that
if you pick 4,000 years for the age of science, generally, then you get an upper bound of 200 generations. Considering the effects of evolution we are looking for via selection of small chance variations, it does not seem to me that evolution can explain more than a small part of the unreasonable effectiveness of mathematics.
Raskin insisted that “the groundwork for mathematics had been laid down long before in our ancestors, probably over millions of generations.” I must say, however, that I do not find this argument particularly convincing. Even if logic had been deeply embedded in our ancestors’ brains, it is difficult to see how this ability could have led to abstract mathematical theories of the subatomic world, such as quantum mechanics, that display stupendous accuracy.
Remarkably, Hamming concluded his article with an admission that “all of the explanations I have given when added together simply are not enough to explain what I set out to account for” (namely, the unreasonable effectiveness of mathematics).
So, should we close by conceding that the effectiveness of mathematics remains as mysterious as it was when we started?
Before giving up, let us try to distill the essence of Wigner’s puzzle by examining what is known as the scientific method.
Scientists first learn facts about nature through a series of experiments and observations. Those facts are initially used to develop some sort of qualitative models of the phenomena (e.g., the Earth attracts apples; colliding subatomic particles can produce other particles; the universe is expanding; and so on). In many branches of science even the emerging theories may remain nonmathematical. One of the best examples of a powerfully explanatory theory of this type is Darwin’s theory of evolution. Even though natural selection is not based on a mathematical formalism, its success in clarifying the origin of species has been remarkable. In fundamental physics, on the other hand, usually the next step involves attempts to construct mathematical, quantitative theories (e.g., general relativity; quantum electrodynamics; string theory; and so on). Finally, the researchers use those mathematical models to predict new phenomena, new particles, and results of never-before-performed experiments and observations. What puzzled Wigner and Einstein was the incredible success of the last two processes. How is it possible that time after time physicists are able to find mathematical tools that not only explain the existing experimental and observational results, but which also lead to entirely new discernments and new predictions?
I attempt to answer this version of the question by borrowing a beautiful example from mathematician Reuben Hersh. Hersh proposed that in the spirit of the analysis of many such problems in mathematics (and indeed in theoretical physics) one should examine the simplest possible case. Consider the seemingly trivial experiment of putting pebbles into an opaque vase. Suppose you first put in four white pebbles, and later you put in seven black pebbles. At some point in their history, humans learned that for some purposes they could represent a collection of pebbles of any color by an abstract concept that they had invented—a natural number. That is, the collection of white pebbles could be associated with the number 4 (or IIII or IV or whichever symbol was used at the time) and the black pebbles with the number 7. Via experimentation of the type I have described above, humans also discovered that another invented concept—arithmetic addition—represents correctly the physical act of aggregation. In other words, the result of the abstract process denoted symbolically by 4 + 7 can predict unambiguously the final number of pebbles in the vase. What does all of this mean? It means that humans have developed an incredible mathematical tool—one that could reliably predict the result of any experiment of this type! This tool is actually much less trivial than it might seem, because the same tool, for instance, does not work for drops of water. If you put four separate drops of water into the vase, followed by seven additional drops, you don’t get eleven separate drops of water in the vase. In fact, to make any kind of prediction for similar experiments with liquids (or gases), humans had to invent entirely different concepts (such as weight) and to realize that they have to weigh individually each drop or volume of gas.
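A minimal sketch may help make Hersh’s point concrete. The code below is my own illustration, not anything from the text: it models the pebble experiment with ordinary integer addition, and contrasts it with a deliberately crude stand-in for what happens when separate water drops merge, where counting-plus-addition is simply the wrong abstraction.

```python
# Illustrative sketch only: the abstract operation "addition" happens to be a
# faithful model for aggregating discrete pebbles, but not for merging water drops.

def predict_pebbles(white: int, black: int) -> int:
    """Model: arithmetic addition predicts how many pebbles end up in the vase."""
    return white + black

def observe_drops(first: int, second: int) -> int:
    """Crude stand-in for observation: separate drops coalesce into one pool,
    so 'number of drops' is not additive; one would track weight or volume instead."""
    return 1 if first + second > 0 else 0

print(predict_pebbles(4, 7))  # 11 -- matches the pebble experiment
print(observe_drops(4, 7))    # 1  -- 'drop addition' fails as a model
```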
The lesson here is clear. The mathematical tools were not chosen arbitrarily, but rather precisely on the basis of their ability to correctly predict the results of the relevant experiments or observations. So at least for this very simple case, their effectiveness was essentially guaranteed. Humans did not have to guess in advance what the correct mathematics would be. Nature afforded them the luxury of trial and error to determine what worked. They also did not have to stick with the same tools for all circumstances. Sometimes the appropriate mathematical formalism for a given problem did not exist, and someone had to invent it (as in the case of Newton inventing calculus, or modern mathematicians inventing various topological/geometric ideas in the context of the current efforts in string theory). In other cases, the formalism had already existed, but someone had to discover that this was a solution awaiting the right problem (as in the case of Einstein using Riemannian geometry, or particle physicists using group theory). The point is that through a burning curiosity, stubborn persistence, creative imagination, and fierce determination, humans were able to find the relevant mathematical formalisms for modeling a large number of physical phenomena.
One characteristic of mathematics that was absolutely crucial for what I dubbed the “passive” effectiveness was its essentially eternal validity. Euclidean geometry remains as correct today as it was in 300 BC. We understand now that its axioms are not inevitable, and rather than representing absolute truths about space, they represent truths within a particular, human-perceived universe and its associated human-invented formalism. Nevertheless, once we comprehend the more limited context, all the theorems hold true. In other words, branches of mathematics get to be incorporated into larger, more comprehensive branches (e.g., Euclidean geometry is only one possible version of geometry), but the correctness within each branch persists. It is this indefinite longevity that has allowed scientists at any given time to search for adequate mathematical tools in the entire arsenal of developed formalisms.
The simple example of the pebbles in the vase still does not address two elements of Wigner’s enigma. First, there is the question of why, in some cases, we seem to get more accuracy out of the theory than we have put into it. In the experiment with the pebbles, the accuracy of the “predicted” results (the aggregation of other numbers of pebbles) is not any better than the accuracy of the experiments that had led to the formulation of the “theory” (arithmetic addition) in the first place. On the other hand, in Newton’s theory of gravity, for instance, the accuracy of its predictions proved to far exceed that of the observational results that motivated the theory. Why? A brief re-examination of the history of Newton’s theory may provide some insight.
Ptolemy’s geocentric model reigned supreme for about fifteen centuries. While the model did not claim any universality—the motion of each planet was treated individually—and there was no mention of physical causes (e.g., forces; acceleration), the agreement with observations was reasonable. Nicolaus Copernicus (1473–1543) published his heliocentric model in 1543, and Galileo put it on solid ground, so to speak. Galileo also established the foundations for the laws of motion. But it was Kepler who deduced from observations the first mathematical (albeit only phenomenological) laws of planetary motion. Kepler used a huge body of data left by the astronomer Tycho Brahe to determine the orbit of Mars. He referred to the ensuing hundreds of sheets of calculations as “my warfare with Mars.” Except for two discrepancies, a circular orbit matched all the observations. Still, Kepler was not satisfied with this solution, and he later described his thought process: “If I had believed that we could ignore these eight minutes [of arc; about a quarter of the diameter of a full moon], I would have patched up my hypothesis…accordingly. Now, since it was not permissible to disregard, those eight minutes alone pointed the path to a complete reformation in astronomy.” The consequences of this meticulousness were dramatic. Kepler inferred that the orbits of the planets are not circular but elliptical, and he formulated two additional, quantitative laws that applied to all the planets. When these laws were coupled with Newton’s laws of motion, they served as the basis for Newton’s law of universal gravitation. Recall, however, that along the way Descartes proposed his theory of vortices, in which planets were carried around the Sun by vortices of circularly moving particles. This theory could not get very far, even before Newton showed it to be inconsistent, because Descartes never developed a systematic mathematical treatment of his vortices.
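For reference, these are the standard textbook forms of the laws in question (my own summary, not quoted from the text): Kepler’s third law relates a planet’s orbital period to the size of its orbit, and Newton’s law of universal gravitation, combined with his laws of motion, reproduces all three of Kepler’s laws.

```latex
% Kepler's third law: the square of the orbital period grows as the cube of the
% semimajor axis of the (elliptical) orbit.
T^2 \propto a^3
% Newton's law of universal gravitation between two bodies of masses m_1 and m_2
% separated by a distance r:
F = \frac{G\, m_1 m_2}{r^2}
```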
What do we learn from this concise history? There can be no doubt that Newton’s law of gravitation was the work of a genius. But this genius was not operating in a vacuum. Some of the foundations had been painstakingly laid down by previous scientists. As I noted in chapter 4, even much lesser mathematicians than Newton, such as the architect Christopher Wren and the physicist Robert Hooke, independently suggested the inverse square law of attraction. Newton’s greatness showed in his unique ability to put it all together in the form of a unifying theory, and in his insistence on providing a mathematical proof of the consequences of his theory. Why was this formalism as accurate as it was? Partly because it treated the most fundamental problem—the forces between two gravitating bodies and the resulting motion. No other complicating factors were involved. It was for this problem and this problem alone that Newton obtained a complete solution. Hence, the fundamental theory was extremely accurate, but its implications had to undergo continuous refinement. The solar system is composed of more than two bodies. When the effects of the other planets are included (still according to the inverse square law), the orbits are no longer simple ellipses. For instance, the Earth’s orbit is found to slowly change its orientation in space, in a motion known as precession, similar to that exhibited by the axis of a rotating top. In fact, modern studies have shown that, contrary to Laplace’s expectations, the orbits of the planets may eventually even become chaotic. Newton’s fundamental theory itself, of course, was later subsumed by Einstein’s general relativity. And the emergence of that theory also followed a series of false starts and near misses. So the accuracy of a theory cannot be anticipated. The proof of the pudding is in the eating—modifications and amendments continue to be made until the desired accuracy is obtained. Those few cases in which a superior accuracy is achieved in a single step have the appearance of miracles.
There is, clearly, one crucial fact in the background that makes the search for fundamental laws worthwhile. This is the fact that nature has been kind to us by being governed by universal laws, rather than by mere parochial bylaws. A hydrogen atom on Earth, at the other edge of the Milky Way galaxy, or even in a galaxy that is ten billion light-years away, behaves in precisely the same manner. And this is true in any direction we look and at any time. Mathematicians and physicists have invented a mathematical term to refer to such properties; they are called symmetries, and they reflect immunity to changes in location, orientation, or the time you start your clock. If not for these (and other) symmetries, any hope of ever deciphering nature’s grand design would have been lost, since experiments would have had to be continuously repeated in every point in space (if life could emerge at all in such a universe). Another feature of the cosmos that lurks in the background of mathematical theories has become known as locality. This reflects our ability to construct the “big picture” like a jigsaw puzzle, starting with a description of the most basic interactions among elementary particles.
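To make the term a bit more concrete, one conventional way to express these symmetries (my own gloss, not the author’s) is to say that the equations describing nature keep the same form when we shift where an experiment is done, when it is done, or how the apparatus is oriented:

```latex
% Invariance of physical laws under spatial translation, time translation,
% and rotation (R denotes a rotation matrix):
\vec{r} \to \vec{r} + \vec{a}, \qquad
t \to t + t_0, \qquad
\vec{r} \to R\,\vec{r}
```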