His solution was remarkably simple. The only way that both Galileo and Maxwell could be correct at the same time would be if both observers measured the speed of the electromagnetic waves they themselves generated to be the value predicted by Maxwell, and if they also measured the speed of the waves generated by their counterpart also to be this same speed. Thus, this must be what happens!
This one requirement may not sound strange, but think for a moment about what it suggests. If I watch a child in a car moving past me throw up, I will measure the speed of the vomit with respect to me to be the speed of the car, say, 60 miles per hour, plus the speed of the vomit with respect to the car, say, 5 feet per second. The mother in the front seat of the car, however, will measure the speed of the vomit with respect to her to be just the latter, 5 feet per second. However, if instead of vomiting, the child shines a laser beam on his mother, Einstein tells me the result will be different. Special relativity appears to require that I measure the speed of the light ray with respect to me to be the speed Maxwell calculated, not this speed plus 60 miles per hour. Similarly, the child’s mother will also measure the same speed.
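The two addition rules can be put side by side in a short numerical sketch. The relativistic composition formula, u′ = (u + v)/(1 + uv/c²), is the standard one from special relativity; the numbers are simply those of the example above:

```python
# Galilean vs. Einsteinian velocity addition, using roughly the numbers
# from the example above (a 60-mph car, and a light beam shone inside it).

C = 299_792_458.0  # speed of light, in meters per second

def galilean_add(u, v):
    # Galileo's rule: relative speeds simply add.
    return u + v

def relativistic_add(u, v):
    # Einstein's composition rule: u' = (u + v) / (1 + u*v / c**2).
    return (u + v) / (1.0 + u * v / C**2)

car = 60 * 1609.344 / 3600.0  # 60 miles per hour, in m/s (about 26.8 m/s)

# A light beam emitted inside the moving car, as seen from the roadside:
print(galilean_add(C, car))      # Galileo would predict slightly more than c
print(relativistic_add(C, car))  # Einstein's rule gives back c (to rounding)
```

Notice that for everyday speeds like the vomit’s 5 feet per second, the correction term uv/c² is so fantastically small that Galileo’s rule is indistinguishable from Einstein’s, which is why the discrepancy went unnoticed for centuries.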
The only way this can be possible is if somehow our measurements of space and time “adjust” themselves so that both of us measure the same speed. After all, speed is measured by determining how far something travels in a fixed time interval. If either the ruler in the car used to measure distance “shrinks” with respect to mine, or the clock that ticks to measure time runs slowly with respect to mine, then it would be possible for both of us to record the same speed for the light ray. In fact, Einstein’s theory says that both happen! Moreover, it states that things are perfectly reciprocal. Namely, as far as the woman in the car is concerned, my ruler “shrinks” with respect to hers and my clock runs slow!
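How much a ruler shrinks and a clock slows is governed by a single number, the Lorentz factor γ = 1/√(1 − v²/c²). Since γ depends only on the relative speed of the two observers, each applies the very same factor to the other, which is the reciprocity just described. A minimal sketch (the formula is standard; the speeds are merely illustrative):

```python
import math

C = 299_792_458.0  # speed of light, in meters per second

def gamma(v):
    # Lorentz factor: a clock moving at speed v runs slow by this factor,
    # and a ruler moving at speed v shrinks by the same factor.
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At highway speed the factor differs from 1 by only a few parts in 10^15:
print(gamma(27.0))
# At 86.6 percent of light speed, moving clocks tick at half rate
# and moving rulers are half as long:
print(gamma(0.866 * C))  # about 2.0
```

Because the factor depends only on the relative speed between the two frames, the woman in the car computes exactly the same γ for my ruler and clock as I compute for hers.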
These statements sound so absurd that no one believes them on first reading. In fact, it requires a far more detailed analysis to investigate fully all the implications of Einstein’s claim that the speed of light must be measured to be the same for all observers and to sort out all the apparent paradoxes it implies. Among these implications are the now-measured facts that moving clocks slow down, that moving particles appear more massive, and that the speed of light is the ultimate speed limit—nothing physical can move faster. These follow logically from the first assertion. While Einstein no doubt deserves credit for having the courage and fortitude to follow up on all these consequences, the really difficult task was coming up with his claim about the constancy of light in the first place. It is a testimony to his boldness and creativity not that he chose to throw out existing laws that clearly worked, but rather that he found a creative way to live within their framework. So creative, in fact, that it sounds nuts.
In the next chapter I’ll come back to a way of viewing Einstein’s theory so that it appears less crazy. For now, however, I want to leave this as a reminder for anyone who has wanted to use the claim that “they said Einstein was crazy too!” to validate his or her own ideas: What Einstein precisely did not do was to claim that the proven laws of physics that preceded him were wrong. Rather, he showed that they implied something that hadn’t before been appreciated.
The theory of special relativity, along with quantum mechanics, forced revisions in our intuitive picture of reality more profoundly than any other developments in the twentieth century. Between the two of them, they shook the foundations of what we normally consider reasonable by altering how we understand those pillars of our perception: space, time, and matter. To a great extent, the rest of this century has been involved with coming to grips with the implications of these changes. This has sometimes required just as much gumption and adherence to proven physics principles as was required to develop the theories themselves. Here’s a case in point related to the subtle marriage of quantum mechanics with special relativity that I alluded to in my discussion of the Shelter Island meeting: the creation of particle-antiparticle pairs from nothing.
While I have mentioned quantum mechanics several times, I have not yet discussed its tenets in any kind of detail, and there is good reason for this. The road to its discovery was much less direct than that for relativity, and also the phenomena to which it applies—in the realm of atomic and subatomic physics—are less familiar. Nevertheless, as the dust settles, we now recognize that quantum mechanics, too, derives from a single, simply stated assertion—which also seems crazy. If I throw a ball in the air and my dog catches it 20 feet away, I can watch the ball during its travels and check that the trajectory it follows is that predicted by Galilean mechanics. However, as the scale of distances and travel times gets smaller, this certainty slowly disappears. The laws of quantum mechanics assert that if an object travels from A to B, you cannot assert that it definitely traverses any particular point in between!
One’s natural reaction to this claim is that it is immediately disprovable. I can shine a light on the object and see where it goes! If you do shine a “light” between A and B, you can detect the object, say an electron, at some particular point, C, between the two. For example, if a series of electron detectors are set along a line separating A and B, only one of them will click as the particle passes by.
So what happens to the original assertion if I can apparently so easily refute it? Well, nature is subtle. I can surely detect the passage of a particle such as an electron, but I cannot do so with impunity! If, for example, I send a beam of electrons toward a phosphorescent screen, like a TV screen, they will light up areas of the screen as they hit it. I can then put up a barrier on the way to the screen with two nearby narrow slits in the way of the beam, so that the electrons must go through one or the other to make it to the screen. In order to say which one each individual electron goes through, I can set up a detector at each of the slits. The most remarkable thing then happens. If I don’t measure the electrons as they pass through the slits, I see one pattern on the screen. If I measure them one by one, so I can ascertain which trajectory each takes, the pattern I see on the screen changes. Doing the measurement changes the result! Thus, while I can confidently assert that each of the electrons I detect does in fact pass through one of the slits, I cannot from this make any inference about the electrons I don’t detect, which clearly have a different behavior.
Behavior such as this is based on the fact that the laws of quantum mechanics require at a certain fundamental level an intrinsic uncertainty in the measurement of natural processes. For example, there is an absolute limit on our ability to measure the position of a moving particle and at the same time to know its speed (and hence where it is going). The more accurately I measure one, the less accurately I can know the other. The act of measurement, because it disturbs a system, changes it. On normal human scales, such disturbances are so small as to go unnoticed. But on the atomic scale, they can become important. Quantum mechanics gets its name because it is based on the idea that energy cannot be transported in arbitrarily small amounts, but instead comes in multiples of some smallest “packet” or quantum (from the German). This smallest packet is comparable to the energies of particles in atomic systems, and so when we attempt to measure such particles, we invariably must do so by allowing some signal to be transferred that is of the same order of magnitude as their initial energy. After the transfer, the energy of the system will be changed, and so will the particle motions involved. If I measure a system over a very long period, the average energy of the system will remain fairly constant, even if it changes abruptly from time to time throughout the measurement process. Thus, one arrives at another famous “uncertainty relation”: The more accurately I want to measure the energy of a system, the longer I have to measure it.
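The position–speed trade-off can be put in rough numbers. A sketch using the standard relation Δx·Δp ≥ ħ/2 (the masses and distances below are illustrative choices, not values from the text; the energy–time relation works the same way):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, in joule-seconds

def min_speed_blur(mass_kg, position_uncertainty_m):
    # Smallest possible uncertainty in speed once the position is known
    # to within dx, from the relation dx * (m * dv) >= hbar / 2.
    return HBAR / (2.0 * mass_kg * position_uncertainty_m)

# A 0.1-kg ball located to within a millimeter:
# the speed blur is absurdly tiny, far below anything observable.
print(min_speed_blur(0.1, 1e-3))         # about 5e-31 m/s

# An electron located to within an atomic diameter (about 1e-10 m):
print(min_speed_blur(9.109e-31, 1e-10))  # about 6e5 m/s, an enormous blur
```

This is the arithmetic behind the statement above: the same uncertainty relation that goes utterly unnoticed on human scales dominates the behavior of particles on atomic scales.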
These uncertainty relations form the heart of quantum-mechanical behavior. They were first elucidated by the German physicist Werner Heisenberg, one of the founders of the theory of quantum mechanics. Heisenberg, like the other boy wonders involved in the development of this theory during the 1920s and 1930s, was a remarkable physicist. Some of my colleagues insist that he is second only to Einstein in his impact on physics in this century. Unfortunately, however, Heisenberg’s popular reputation today is somewhat tainted because he remained a prominent scientific figure even during the days of Nazi Germany. It is not at all clear that he overtly supported the Nazi regime or its war effort. But, unlike a number of his colleagues, he did not actively work against it. In any case, his work on quantum mechanics—in particular, his elucidation of the uncertainty principle—changed forever the way we understand the physical world. In addition, probably no physics result in this century has so strongly affected philosophy.
Newtonian mechanics implied complete determinism. The laws of mechanics imply that one could, in principle, completely predict the future behavior of a system of particles (presumably including the particles that make up the human brain) with sufficient knowledge of the positions and motions of all particles at any one time. The uncertainty relations of quantum mechanics suddenly changed all that. If one took a snapshot giving precise information on the positions of all particles in a system, one would risk losing all information about where those particles were going. With this apparent loss in determinism—no longer could one make completely accurate predictions about the future behavior of all systems, even in principle—came, at least in many people’s minds, free will.
While the principles of quantum mechanics have excited many nonphysicists, especially philosophers, it is worth noting that all the philosophical implications of quantum mechanics have very little impact whatsoever on physics. All that physicists need to consider are the rules of the game. And the rules are that inherent, and calculable, measurement uncertainties exist in nature. There are many ways of attempting to describe the origin of these uncertainties, but, as usual, the only completely consistent ones (and there are, as usual, a number of different but equivalent ones) are mathematical. There is one mathematical formulation that is particularly amenable to visualization, and it is due to none other than Richard Feynman.
One of Feynman’s greatest contributions in physics was to reinterpret the laws of quantum mechanics in terms of what is known in mathematical parlance as path integrals, along the lines of Fermat’s principle for light that I discussed in the last chapter. What started as a “mere” calculational scheme has now influenced the way a whole generation of physicists picture what they are doing. It even introduced a mathematical trick, called “imaginary time,” that Stephen Hawking has alluded to in his popular A Brief History of Time.
Feynman’s path integrals give rules for calculating physical processes in quantum mechanics, and they go something like this. When a particle moves from point A to point B, imagine all the possible paths it can take.
With each path, one associates a kind of probability that the particle will take it. The tricky part is to calculate the probability associated with a given path, and that is what all the mathematical tools such as imaginary time are for. But that is not what concerns me here. For macroscopic objects (those large compared to the scale where quantum-mechanical effects turn out to be significant) one finds that one path is overwhelmingly more probable than any other path, and all others can be ignored. That is the path that is predicted by the laws of classical mechanics, and this explains why the observed laws of motion of macroscopic objects are so well described by classical mechanics. But for particles moving on scales where quantum mechanics can make significantly different predictions than classical mechanics, several different paths may be equally probable. In this case, the final probability for a particle to go from A to B will depend on considering more than one possible path. Now this final probability that a particle that starts out at A will end up at B turns out to depend upon the sum of the individual “probabilities” for all the possible paths.
What makes quantum mechanics fundamentally different from classical mechanics is that these “probabilities” as calculated bear one fundamental difference from actual physical probabilities as we normally define them. While normal probabilities are always positive, the “probabilities” for the individual paths in quantum mechanics can be negative, or even “imaginary,” that is, a number whose square is negative! (If you don’t like to think about imaginary probabilities, you can get rid of this feature by imagining a world where “time” is an imaginary number. In this case, all the probabilities can be written as positive numbers. Hence the term imaginary time. No further explanation of this issue is necessary here, however. Imaginary time is merely a mathematical construct designed to help us handle the mathematics of quantum mechanics, and nothing more.) There is no problem calculating the final real physical probability for a particle to go from A to B, because after adding together the individual quantum-mechanical “probabilities” for each path, the laws of quantum mechanics then tell me to square the result in such a way that the actual physical probability is always a positive number.
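The bookkeeping just described, adding one complex “probability” (an amplitude) per path and then squaring the magnitude of the sum, can be sketched for the simplest case of two paths. The unit amplitudes and the phases below are illustrative; in a genuine calculation each phase would be computed from the path itself:

```python
import cmath

def amplitude(phase):
    # Each path contributes a complex number of unit size whose phase,
    # in a real calculation, is determined by the path taken.
    return cmath.exp(1j * phase)

def probability(phases):
    # The quantum rule: add the amplitudes for all the paths, THEN square
    # the magnitude, so the physical probability is always a positive number.
    return abs(sum(amplitude(p) for p in phases)) ** 2

# Two paths in phase reinforce each other:
print(probability([0.0, 0.0]))       # 4.0
# Two paths exactly out of phase cancel completely:
print(probability([0.0, cmath.pi]))  # essentially 0.0
# Detecting which path was taken amounts to adding squared magnitudes
# path by path instead, and the cancellation disappears:
print(sum(abs(amplitude(p)) ** 2 for p in [0.0, cmath.pi]))  # 2.0
```

This is the arithmetic behind the two-slit experiment described earlier: leaving the paths unobserved lets the amplitudes interfere, while measuring each electron at a slit replaces the sum of amplitudes with a sum of ordinary probabilities, and the pattern on the screen changes.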