Shortly after parity violation was discovered, another apparent symmetry of nature was found to be lacking. This is the symmetry between particles and their antiparticles. It had been previously thought, because antiparticles are identical in all ways to their particle partners except for, say, their electric charge, that if we replaced all the particles in the world by their antiparticles, the world would be identical. It is actually not this simple, because some particles with distinct antiparticles are electrically neutral, and can only be distinguished from their antiparticle partners through observing how each decays. In 1964, it was discovered that one such particle, called a neutral Kaon, decayed in a way that could not be reconciled with particle-antiparticle symmetry. Again, it appeared that the weak interaction was the culprit. The strong interaction between the quarks that make up Kaons had been independently measured to respect the symmetries of parity and particle-antiparticle interchange to high precision.
However, in 1976, Gerard ’t Hooft, in one of his many groundbreaking theoretical discoveries, demonstrated that what has become the accepted theory of the strong interaction, quantum chromodynamics, in fact should violate both parity and particle-antiparticle symmetry. Several clever theoretical proposals have been made to reconcile the apparent observed conservation of particle-antiparticleness in the strong interaction with ’t Hooft’s result. To date, we have no idea whether any of them are correct. Perhaps the most exciting involves the possible existence of new elementary particles, called axions. If these exist, they could easily be the dark matter that dominates the mass of the universe. Should they be detected as such, we will have made two profound discoveries. We will have learned some fundamental things about microscopic physics, as well as determining what the future evolution of the universe will be. If we do make such a discovery, symmetry considerations will have been the guiding light.
There are other symmetries of nature that exist, or do not exist, for reasons we don’t understand. They form the fodder of modern theoretical research. Such problems prompt the major outstanding questions of elementary-particle physics: Why are there two other distinct sets, or “families” of elementary particles that resemble the familiar particles making up normal matter, except that these other particles are all much heavier? Why are the masses within each family different? Why are the “scales” of the weak interaction and gravity so different? It has become traditional for physicists to frame these questions in terms of symmetry, and it is not unreasonable to expect, based on all of our experience to date, that the answers will be as well.
SIX
IT AIN’T OVER TILL IT’S OVER
We do not claim that the portrait we are making is the whole truth, only that it is a resemblance.
—Victor Hugo,
Les Misérables
There is a scene from a Woody Allen movie I particularly like in which a man obsessed with the meaning of life and death visits his parents, expresses confusion, and cries out for guidance. His father looks up and complains, “Don’t ask me about the meaning of life. I don’t even know how the toaster works!”
Throughout this book I too have stressed, perhaps not as cogently, the strong connection between the sometimes esoteric issues of interest at the cutting edge and the physics of everyday phenomena. And so it seems appropriate to focus in this final chapter on how this connection is propelling us toward the discoveries-to-be in the twenty-first century. For the ideas I have discussed—going back to those that sprang forth from the small meeting in Shelter Island almost fifty years ago—have revolutionized the relationship between any possible future discoveries and our existing theories. The result has been perhaps the most profound, and unsung, realignment in our worldview that has taken place during the modern era. Whether or not one thinks there even exists such a thing as the Ultimate Answer still remains largely a matter of personal prejudice. However, modern physics has led us to the threshold of understanding why, at least directly, it doesn’t really matter.
The central question I want to address here is: What guides our thinking about the future of physics, and why? I have spent the better part of this book describing how physicists have honed their tools to build our current understanding of the world, not least because it is precisely these tools that will guide our current approach to those things we have yet to understand. For this reason the discussion I am about to embark upon takes me full circle, back to approximation and scale. We thus will end where we began.
Physics has a future only to the extent that existing theory is incomplete. To get some insight into this, it is useful to ask what would be the properties of a complete physical theory if we had one. The simplest answer is almost tautological: A theory is complete if all the phenomena it was developed to predict are accurately predicted. But is such a theory necessarily “true,” and, more important, is a true theory necessarily complete? For instance, is Newton’s Law of Gravity true? It does predict with remarkable accuracy the motion of the planets around the sun and of the moon around the Earth. It can be used to weigh the sun to almost one part in a million. Moreover, Newton’s Law is all that is necessary to compute the motion of projectiles near the Earth’s surface to an accuracy of better than 1 part in 100 million. However, we now know that the bending of a light ray near the Earth is twice the amount that one might expect using Newton’s Law. The correct prediction is obtained instead using general relativity, which generalizes Newton’s Law and reduces to it in cases where the gravitational field is small. Thus, Newton’s Universal Law of Gravity is incomplete. But is it untrue?
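To make the factor of two concrete, a standard back-of-the-envelope comparison (my illustration, not the book’s) treats light as a Newtonian particle grazing a body of mass M at closest approach b and compares its deflection angle with the general-relativistic one:

\[
\delta_{\text{Newton}} = \frac{2GM}{c^{2}b}, \qquad \delta_{\text{GR}} = \frac{4GM}{c^{2}b}.
\]

For a ray grazing the sun, the relativistic value is about 1.75 seconds of arc, the excess famously confirmed by the 1919 eclipse observations.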
The preceding discussion may make the answer seem obvious. After all, one can measure deviations from Newton’s Law. On the other hand, if every observation you are ever likely to make directly in your life is consistent with the predictions of Newton’s Law, for all intents and purposes, it is true. To get around this technicality, suppose instead that one defines scientific truth to include only those ideas that are completely in accord with everything we know about the world. Newton’s Law certainly does not meet this criterion. However, until at least the late nineteenth century, it did. Was it true then? Is scientific truth time-dependent?
You might say, especially if you are a lawyer, that my second definition, too, suffers from poor framing. I should remove the words “everything we know” and perhaps replace them with “everything that exists.” The explanation is then watertight. But it is also useless! It becomes philosophy. It cannot be tested. We will never know whether we know everything that exists. All we can ever know is everything we know! This problem is, of course, insurmountable, but it has an important implication that is not often appreciated. It is a fundamental tenet of science that we can never prove something to be true; we can only prove it to be false.
This is a very important idea, one that is at the basis of all scientific progress. Once we find an example in which a theory that may have worked correctly for millennia no longer agrees with observation, we then know that it must be supplemented—with either new data or a new theory. There is no arguing.
Nevertheless, there is a deeper and, I hope, less semantic issue buried here, and it is the one I want to concentrate on. What does it mean to say, even in principle, that any theory is the correct theory? Consider quantum electrodynamics (QED), the theory that reached completion as a result of the Shelter Island meeting in 1947. Some twenty years earlier, young Dirac had written down his relativistic equation for the quantum-mechanical motion of an electron. This equation, which correctly accounted for everything then known about electrons, presented problems, a number of which the Shelter Island meeting was convened to address, as I described. Nasty mathematical inconsistencies kept cropping up. The work of Feynman, Schwinger, and Tomonaga eventually presented a consistent method for handling these problems and producing meaningful predictions, which agreed completely with all the data. In the decades since the Shelter Island meeting, every measurement that has been made of the interactions of electrons and light has been in complete agreement with the predictions of this theory. In fact, it is the best-tested theory in the world. Theoretical calculations have been compared to ultrasensitive experimental measurements, and the agreement is now better than 9 decimal places in some cases! We could never hope for a more accurate theory than this.
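To put a face on that kind of precision, one much-cited example (my choice; the book does not single it out) is the electron’s anomalous magnetic moment, whose leading quantum correction Schwinger computed in 1948:

\[
a_{e} \equiv \frac{g-2}{2} = \frac{\alpha}{2\pi} + \cdots \approx 0.001159652\ldots
\]

Measured and calculated values of \(a_{e}\) now agree to roughly a part in a billion, which is what “better than 9 decimal places” means in practice.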
Is QED, then, the theory of the interactions of electrons and photons? Of course not. We know, for instance, that if one considers processes at sufficiently high energies involving the heavy W and Z particles, then QED becomes part of a larger theory, the “electroweak” theory. At this stage, QED alone is not complete.
This is not a perverse accident. Even if the W and Z particles did not exist and electromagnetism were the only force we knew about in nature besides gravity, we could not call QED the theory of electrons and photons. Because what we have learned in the years following the Shelter Island meeting is that this statement, without further qualification, makes no sense physically.
The incorporation of relativity and quantum mechanics, of which QED was the first successful example, has taught us that every theory like QED is meaningful only to the extent that we associate a dimensional scale with each prediction. For example, it is meaningful to say that QED is the theory of electron and photon interactions that take place at a distance of, say, 10⁻¹⁰ centimeters. On such a scale, the W and Z particles do not have a direct effect. This distinction may seem like nitpicking at the moment, but trust me, it isn’t.
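A quick estimate (mine, using standard numbers rather than anything in the text) shows why: a particle of mass M can make itself felt as a virtual intermediary only over distances of order its Compton wavelength. For the W boson, with a mass of roughly 80 GeV,

\[
r \sim \frac{\hbar}{Mc} = \frac{\hbar c}{Mc^{2}} \approx \frac{197\ \text{MeV}\cdot\text{fm}}{8\times 10^{4}\ \text{MeV}} \approx 2.5\times 10^{-16}\ \text{cm},
\]

hundreds of thousands of times smaller than the 10⁻¹⁰-centimeter scale quoted above. At atomic distances the W and Z simply cannot reach.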
In chapter 1, I harped on the necessity of associating dimensions and scale with physical measurements.
The recognition of the need to associate scales, of length or energy, with physical theory really began in earnest when Hans Bethe made the approximation that allowed him to calculate the Lamb shift five days after the Shelter Island meeting. I remind you that Bethe was able to turn an unmanageable calculation into a prediction using physical reasoning as a basis to ignore effects he did not understand.
Recall what Bethe was up against. Relativity and quantum mechanics imply that particles can spontaneously “pop” out of empty space only to disappear quickly, as long as they do so for too short a time to be directly measured. Nevertheless, the whole point of the Lamb shift calculation was to demonstrate that these particles can affect the measurable properties of ordinary particles, such as an electron in a hydrogen atom. The problem, however, was that the effects of all possible virtual particles, with arbitrarily high energies, appeared to make the calculation of the properties of the electron mathematically intractable. Bethe argued that somehow, if the theory were to be sensible, the effect of virtual particles of arbitrarily high energy acting over only very short time intervals should be ignorable. He did not know at the time how to work with the complete theory, so he just threw out the effects of these high-energy virtual particles and hoped for the best. That is exactly what he got.
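Schematically (this is my gloss on why the gamble worked, not Bethe’s actual formula), the shift he computed depends on the energy Λ above which virtual-particle contributions are thrown away only through a logarithm,

\[
\Delta E \propto \ln\!\left(\frac{\Lambda}{\bar{E}}\right),
\]

where \(\bar{E}\) is a typical atomic energy. Choosing Λ to be the electron’s rest energy put Bethe within a few percent of the measured Lamb shift, and because the dependence is logarithmic, doubling or halving the cutoff barely changes the answer. That insensitivity is exactly what made ignoring the high-energy physics legitimate.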
When Feynman, Schwinger, and Tomonaga figured out how to handle the complete theory, they found out that the effects of high-energy virtual particles were, indeed, consistently ignorable. The theory gave reasonable answers as any reasonable theory must. After all, if effects on extremely small time and distance scales compared to the atomic scales being measured were to be significant, there would be no hope of doing physics. It is like saying that in order to understand the motion of a baseball, one would have to follow in detail the forces acting at the molecular level during every millionth of a second of its travels.
It has been an implicit part of physics since Galileo that irrelevant information must be discarded, a fact I also stressed in chapter 1. This is true even in very precise calculations. Consider the baseball again. Even if we calculate its motion to the nearest millimeter, we are still making the assumption that we can treat it as a baseball. Actually, it is an amalgam of approximately 10²⁴ atoms, each of which is performing many complicated vibrations and rotations during the flight of the ball. It is a fundamental property of Newton’s Laws, however, that we can take an arbitrarily complicated object and divide its motion into two pieces: (1) the motion of the “center of mass,” determined by averaging the position of all the individual masses under consideration, and (2) the motion of all the individual objects about the center of mass. Note that the center of mass need not be in a location where any mass actually exists. For example, the center of mass of a doughnut is right in the center, where the hole is! If we were to throw the doughnut in the air, it might twirl and spin in complicated ways, but the movement of the center of mass, the doughnut hole, would follow a simple parabolic motion first elucidated by Galileo.
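In symbols (the standard textbook definitions, which the text describes only in words), the center of mass of a collection of point masses and its equation of motion under an external force are

\[
\mathbf{R}_{\mathrm{cm}} = \frac{1}{M}\sum_{i} m_{i}\,\mathbf{r}_{i}, \qquad M\,\ddot{\mathbf{R}}_{\mathrm{cm}} = \mathbf{F}_{\mathrm{ext}}, \qquad M = \sum_{i} m_{i}.
\]

The internal forces among the roughly 10²⁴ atoms cancel in pairs, so under gravity alone the center of mass, whether it sits inside the ball or in the doughnut’s hole, traces the same parabola a single point particle would.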
Thus, when we study the motion of balls or doughnuts according to Newton’s Laws, we are really studying what we now call an effective theory. A more complete theory must be a theory of quarks and electrons, or at least of atoms. But we are able to lump all these irrelevant degrees of freedom into something we call a ball—by which we mean the center of mass of a ball. The laws of motion of all macroscopic objects involve an effective theory of the motion of, and about, their centers of mass. The effective theory of a ball’s motion is all we need, and we can do so much with it that we tend to think of it as fundamental. What I will now argue is that all theories of nature, at least the ones that currently describe physics, are of necessity effective theories. Whenever you write one down, you are throwing things out.
