How to Teach Physics to Your Dog

Today, h is known as Planck’s constant in his honor, and has the value 6.626 × 10⁻³⁴ kg m²/s (that’s 0.0000000000000000000000000000000006626 kg m²/s). It’s a very small number indeed, but definitely not zero.

Planck’s trick amounts to treating light, which physicists thought of as a continuous wave, as coming in discrete chunks, like particles. Planck’s “oscillators” could only emit light in discrete units of brightness. This is a little like imagining a pond where waves can only be one, two, or three centimeters high, never one and a half or two and a quarter. Everyday waves don’t work that way, but that’s what Planck’s mathematical model requires.

These “oscillators” are also what puts the “quantum” in “quantum physics.” Planck referred to the specific levels of energy in his oscillators as “quanta” (the plural of “quantum,” from the Latin word for “how much”), so an oscillator at a given frequency might contain one quantum (one unit of energy, hf), two quanta, three quanta, and so on, but never one and a half or two and a quarter. The name for the steps stuck, and came to be applied to the entire theory that grew out of Planck’s desperate trick.

Though he’s often given credit for inventing the idea of light quanta, Planck never really believed that light came in discrete quanta, and he always hoped that somebody would find a clever way to derive his formula without resorting to trickery.

The first person to talk seriously about light as a quantum particle was Albert Einstein in 1905, who used it to explain the photoelectric effect. The photoelectric effect is another physical effect that seems like it ought to be simple to describe: when you shine light on a piece of metal, electrons come out. This forms the basis for simple light sensors and motion detectors: light falling on a sensor knocks electrons out of the metal, which then flow through a circuit. When the amount of light hitting the sensor changes, the circuit performs some action, such as turning lights on when it gets dark, or opening doors when a dog passes in front of the sensor.

The photoelectric effect ought to be readily explained by thinking of light as a wave that shakes atoms back and forth until electrons come out, like a dog shaking a bag of treats until they fly all over the kitchen. Unfortunately, the wave model comes out all wrong: it predicts that the energy of the electrons leaving the atoms should depend on the intensity of the light—the brighter the light, the harder the shaking, and the faster the bits flying away should move. In experiments, though, the energy of the electrons doesn’t depend on the intensity at all. Instead, the energy depends on the frequency, which the wave model says shouldn’t matter. At low frequencies, you never get any electrons no matter how hard you shake, while at high frequency, even gentle shaking produces electrons with a good deal of energy.

• • •

“Physicists are silly.”

“I beg your pardon?”

“Well, any dog knows that. When you get a bag with treats in it, you always shake it as fast as you can, as hard as you can. That’s how you get the treats out.”

“Yes, well, what can I say? Dogs have an excellent intuitive grasp of quantum theory.”

“Thank you. We’re cute, too.”

“Of course, the point of physics is to understand why the treats come out when they do.”

“Maybe for you. For dogs, the point is to get the treats.”

Einstein explained the photoelectric effect by applying Planck’s formula to light itself. Einstein described a beam of light as a stream of little particles, each with an energy equal to Planck’s constant multiplied by the frequency of the light wave (the same rule used for Planck’s “oscillators”). Each photon (the name now given to these particles of light) has a fixed amount of energy it can provide, depending on the frequency; and some minimum amount of energy is required to knock an electron loose. If the energy of a single photon is more than the minimum needed, the electron will be knocked loose, and carry the rest of the photon’s energy with it. The higher the frequency, the higher the single photon energy and the more energy the electrons have when they leave, exactly as the experiments show. If the energy of a single photon is lower than the minimum energy for knocking an electron out, nothing happens, explaining the lack of electrons at low frequencies.
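Einstein’s energy bookkeeping can be sketched in a few lines of Python. The minimum energy needed to free an electron (the “work function”) depends on the metal; the 2.3 eV used below is roughly sodium’s value and is an illustrative assumption, not a number from the text:

```python
# A sketch of Einstein's photoelectric relation: the ejected electron's
# kinetic energy is (photon energy) - (minimum energy to knock it loose).
# Work function of 2.3 eV (roughly sodium) is an assumed example value.

H = 6.626e-34    # Planck's constant, in J s (kg m^2/s)
EV = 1.602e-19   # joules per electron volt

def electron_energy_ev(frequency_hz, work_function_ev=2.3):
    """Kinetic energy (in eV) of an electron ejected by a single photon,
    or None if one photon carries too little energy to free an electron."""
    photon_ev = H * frequency_hz / EV
    if photon_ev < work_function_ev:
        return None   # below threshold: no electrons, no matter how bright
    return photon_ev - work_function_ev

# Red light (~4.3e14 Hz) falls below this threshold; violet (~7.5e14 Hz)
# exceeds it, so each violet photon ejects an electron with energy to spare.
print(electron_energy_ev(4.3e14))   # None
print(electron_energy_ev(7.5e14))   # about 0.8 (eV)
```

Note that brightness never enters the function: a brighter beam just means more photons, and thus more electrons, each with the same energy, exactly as the experiments show.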

Describing light as a particle was a hugely controversial idea in 1905, as it overturned a hundred years’ worth of physics and required a very different view of light. Rather than a continuous wave, like water poured into a dog’s bowl, light has to be thought of as a stream of discrete particles, like a scoop of kibble poured into a bowl. And yet each of those particles still has a frequency associated with it, and somehow they add up to give an interference pattern, just like a wave.

Other physicists in 1905 found this deeply troubling, and Einstein’s model took a while to gain acceptance. The American physicist Robert Millikan hated Einstein’s idea, and performed a series of extremely precise photoelectric effect experiments in 1916 hoping to prove Einstein wrong. In fact, his results confirmed Einstein’s predictions, but even that wasn’t enough to get the photon idea accepted. Wide acceptance of the photon picture didn’t come until 1923, when Arthur Holly Compton did a famous series of experiments with X-rays that demonstrated unmistakably particle-like behavior from light: he showed that photons carry momentum, and this momentum is transferred to other particles in collisions.

If you take the Planck formula for the energy of a single photon, and combine it with equations from Einstein’s special relativity, you find that a single photon of light ought to carry a small amount of momentum, given by the formula:

p = h/λ

where p is the symbol for momentum and λ is the wavelength of the light.
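Plugging numbers into this formula shows just how small photon momentum is, and why X-rays (rather than visible light) were the right tool for Compton. The wavelengths below are the ones mentioned later in the text (500 nm for visible light, 0.0709 nm for Compton’s X-rays):

```python
H = 6.626e-34   # Planck's constant, in J s (kg m^2/s)

def photon_momentum(wavelength_m):
    """Momentum of a single photon, p = h / wavelength, in kg m/s."""
    return H / wavelength_m

visible = photon_momentum(500e-9)     # visible light, ~500 nm
xray = photon_momentum(0.0709e-9)     # Compton's X-rays, 0.0709 nm
print(visible)   # ~1.3e-27 kg m/s -- far too small to notice in daily life
print(xray)      # ~9.3e-24 kg m/s -- thousands of times larger
```

The shorter the wavelength, the larger the momentum, which is why only short-wavelength X-rays give an electron a kick big enough to measure.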

• • •

“I thought you said there wasn’t any relativity in this book?”

“I said the book isn’t about relativity. That’s not the same thing. Some ideas from relativity are important to quantum physics, as well.”

“What’s relativity got to do with this, though?”

“Well, what relativity says is that because a photon has some energy, it must have some momentum, even though it doesn’t have any mass.”

“So . . . it’s an E = mc² thing?”

“Not exactly, but it’s similar. Photons have momentum because of their energy in the same way that objects have energy because of their mass. And nice job dropping an equation in there.”

“Please. Even inferior dogs know E = mc². And I am an exceptional dog.”

A photon with a small wavelength has a lot of momentum, while a photon with a large wavelength has very little. That means that the interaction between a photon of light and a stationary object ought to look just like a collision between two particles: the stationary object gains some energy and momentum, and the moving photon loses some energy and momentum. We don’t notice this because the momentum involved is tiny—Planck’s constant is a very small number—but if we look at an object with a very small mass, like an electron, and photons with a very short wavelength (and thus a relatively high momentum), we can detect the change in momentum.

In 1923, Compton bounced X-rays with an initial wavelength of 0.0709 nanometers off a solid target (X-rays are just light with an exceptionally short wavelength, compared to about 500 nm for visible light). When he looked at the X-rays that scattered off the target, he found that they had longer wavelengths, indicating that they had lost momentum (X-rays bouncing off at 90 degrees from their original direction had a wavelength of 0.0733 nm, for example). This loss of momentum is exactly what should happen if light is a particle: when an X-ray photon comes in and hits a more or less stationary electron in a target, it gives up some of its momentum to the electron, which starts moving. After the collision, the photon has less momentum, and thus a longer wavelength, exactly as Compton observed.

The amount of momentum lost also depends on the angle at which the photon bounces off—a photon that glances off an electron doesn’t lose very much momentum, while one that bounces almost straight back loses a lot. Compton measured the wavelength at many different angles, and his results exactly fit the theoretical prediction, confirming that the shift was from collisions with electrons, and not some other effect.
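The angle dependence Compton measured follows a simple formula. The formula itself isn’t written out in the text, but it is the standard Compton-scattering result his data confirmed, and it reproduces the numbers quoted above:

```python
import math

H = 6.626e-34     # Planck's constant, J s
M_E = 9.109e-31   # electron mass, kg
C = 2.998e8       # speed of light, m/s

def scattered_wavelength_nm(incoming_nm, angle_deg):
    """Standard Compton formula (not given explicitly in the text):
    the wavelength grows by (h / m_e c) * (1 - cos(angle))."""
    shift_m = (H / (M_E * C)) * (1 - math.cos(math.radians(angle_deg)))
    return incoming_nm + shift_m * 1e9   # convert shift to nanometers

print(scattered_wavelength_nm(0.0709, 90))    # ~0.0733 nm, as in the text
print(scattered_wavelength_nm(0.0709, 180))   # straight back: biggest shift
print(scattered_wavelength_nm(0.0709, 10))    # glancing blow: barely changed
```

A glancing photon (small angle) loses almost nothing, while one that bounces straight back loses the most, just as the paragraph above describes.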

Einstein, Millikan, and Compton all won Nobel prizes for demonstrating the particle nature of light. Taken together, Millikan’s photoelectric effect experiments and Compton’s scattering experiments were enough to get most physicists to accept the idea of light as being made up of a stream of particles.

As strange as the idea of light as a particle was, though, what came next was even stranger.

INTERFERING ELECTRONS: PARTICLES AS WAVES

Also in 1923, a French Ph.D. student named Louis Victor Pierre Raymond de Broglie made a radical suggestion: he argued that there ought to be symmetry between light and matter, and so a material particle such as an electron ought to have a wavelength. After all, if light waves behave like particles, shouldn’t particles behave like waves?

De Broglie suggested that just as a photon has a momentum determined by its wavelength, a material object like an electron should have a wavelength determined by its momentum:

λ = h/p

which is just the formula for the momentum of a photon (page 24) turned around to give the wavelength. The idea has a certain mathematical elegance, which was appealing to theoretical physicists even in 1923, but it also seems like patent nonsense—solid objects show no sign of behaving like waves. When de Broglie presented his idea as part of his Ph.D. thesis defense, nobody knew what to make of it. His professors weren’t even sure whether to give him the degree or not, and resorted to showing his thesis to Einstein. Einstein proclaimed it brilliant, and de Broglie got his degree, but his idea of electrons as waves had little support until two experiments in the late 1920s showed incontrovertible proof that electrons behaved like waves.
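De Broglie’s formula gives concrete numbers, too. For a slow electron, momentum follows from kinetic energy via p = √(2mE). The 54 eV used below is the accelerating voltage Davisson and Germer actually used, a historical detail not stated in the text:

```python
import math

H = 6.626e-34     # Planck's constant, J s
M_E = 9.109e-31   # electron mass, kg
EV = 1.602e-19    # joules per electron volt

def electron_wavelength_nm(kinetic_ev):
    """De Broglie wavelength = h / p, with p = sqrt(2 m E) for an electron
    moving much slower than light."""
    p = math.sqrt(2 * M_E * kinetic_ev * EV)
    return H / p * 1e9   # convert meters to nanometers

# 54 eV electrons, as in the Davisson-Germer experiment (assumed value):
print(electron_wavelength_nm(54))   # ~0.167 nm
```

That wavelength is comparable to the spacing between atoms in a crystal, which is exactly why a nickel crystal can act as the “slits” in the diffraction experiment described next.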

In 1927, two American physicists, Clinton Davisson and Lester Germer, were bouncing electrons off a surface of nickel, and recording how many bounced off at different angles. They were surprised when their detector picked up a very large number of electrons bouncing off at one particular angle. This mysterious result was eventually explained as the wavelike diffraction of the electrons bouncing off different rows of atoms in their nickel target. The beam of electrons penetrated some distance into the nickel, and part of the beam bounced off the first row of atoms in the nickel crystal, while other parts bounced off the second, and the third, and so on. Electrons reflecting from all these different rows of atoms behaved like waves. The waves that bounced off atoms deeper in the crystal traveled farther on the way out than the ones that bounced off atoms closer to the surface. These waves interfered with one another, like light waves passing through the different slits in Young’s experiment (though with many slits, not just two). Most of the time, the reflected waves were out of phase and canceled one another out. For certain angles, though, the extra distance traveled was exactly right for the waves to add in phase and produce a bright spot, which Davisson and Germer detected as a large increase in the number of electrons reflected at that angle. The de Broglie formula for assigning a wavelength to the electron predicts the Davisson and Germer result perfectly.

Diffraction of electrons off a crystal of nickel. An incoming electron beam (dashed line) passes into a regular crystal of atoms, and bits of the wave (individual electrons) reflect off different atoms in the crystal. Electrons reflected from deeper in the crystal travel a longer distance on the way out (darker line), but for certain angles, that distance is a multiple of a full wavelength, and the waves leaving the crystal add in phase to give the bright spot seen by Davisson and Germer.
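The in-phase condition sketched in the figure can be checked numerically: a bright spot appears wherever the extra path length between rows is a whole number of wavelengths. This is a simplified, Bragg-like version of the geometry, and the crystal spacing used (0.215 nm between rows of nickel atoms) is a standard textbook value, not a number from the text:

```python
import math

def bright_angles_deg(wavelength_nm, row_spacing_nm, max_order=3):
    """Angles where waves scattered from successive rows of atoms add in
    phase: the extra path d*sin(theta) equals a whole number of wavelengths.
    (A simplified model of the geometry shown in the figure.)"""
    angles = []
    for n in range(1, max_order + 1):
        s = n * wavelength_nm / row_spacing_nm
        if s <= 1:   # sin(theta) can't exceed 1; higher orders don't exist
            angles.append(math.degrees(math.asin(s)))
    return angles

# Assumed values: 0.167 nm electron wavelength, 0.215 nm row spacing.
print(bright_angles_deg(0.167, 0.215))   # a single angle, near 51 degrees
```

With these numbers only one order fits, so only one bright spot appears, which matches the point made in the dialogue below: with a different crystal spacing or faster (shorter-wavelength) electrons, more spots would show up.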

• • •

“Wait, how does that work? If there are lots of slits, shouldn’t there be lots of spots?”

“Not really. When you add the waves together, you still get a pattern of bright and dark spots, but as you use more slits, the bright spots get brighter and narrower, and the dark spots get darker and wider.”

“So, if I run through the picket fence to the neighbors’ yard, I’ll get brighter and narrower on the other side?”

“You’d be narrower, all right, but it wouldn’t be a bright idea. The point here is that the ‘slits’ that Davisson and Germer were using were so close together that they could only see one bright spot in the region where they could put their detector. With a different crystal, or faster-moving electrons, they would’ve seen more spots.”

At around the same time, George Paget Thomson at the University of Aberdeen carried out a series of experiments in which he shot beams of electrons at thin films of metal, and observed diffraction patterns in the transmitted electrons (such patterns are produced in essentially the same way as the pattern in the Davisson-Germer experiment). Diffraction patterns like those seen by Davisson and Germer and Thomson are an unmistakable signature of wave behavior, as Thomas Young showed in 1799, so their experiments provided proof that de Broglie was right, and electrons have wave nature. De Broglie won the Nobel Prize in Physics in 1929 for his prediction, and Davisson and Thomson shared a Nobel Prize in 1937 for demonstrating the wave nature of the electron.
