The Universal Sense
by Seth Horowitz

The impact probably also took a lot of the atmosphere away with it, which would have quieted things down a bit on anything but a seismic level. But the bombardments continued, each impactor bringing not just more rock and metal but a new load of volatile ices and frozen gases, creating a new atmosphere, delivering the water needed to cool the Earth, and bringing a new sound—the sound of rain, condensing out of the massive amounts of water vapor in the second atmosphere and forming the Earth’s seas.

The Earth was growing loud again, with the sounds of water added to the volcanic purges and exploding meteorites. The sounds spread not only as high-speed shear waves through newly cooled rock, but also as spherical rumbles through the atmosphere and cylindrically spreading growls through the new oceans. The Earth created its own soundtrack, albeit one formed almost exclusively from what we would call noise, an almost equal (or at least very broad) distribution of energy across the entire acoustic range.
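The difference between those two kinds of spreading is easy to quantify. As a minimal sketch (illustrative numbers of mine, not the author's): a spherically spreading wave loses about 6 dB per doubling of distance, while a wave confined to a layer, such as a sea, spreads cylindrically and loses only about 3 dB.

```python
import math

def spreading_loss_db(r_m, mode="spherical", r_ref_m=1.0):
    """Transmission loss in dB relative to the level at r_ref_m.

    Textbook geometric-spreading approximations:
      spherical   (free three-dimensional space): 20 * log10(r / r_ref)
      cylindrical (sound trapped in a layer):     10 * log10(r / r_ref)
    """
    factor = 20.0 if mode == "spherical" else 10.0
    return factor * math.log10(r_m / r_ref_m)

for r in (2, 10, 100, 1000):
    print(f"r = {r:4d} m: spherical {spreading_loss_db(r):5.1f} dB, "
          f"cylindrical {spreading_loss_db(r, 'cylindrical'):5.1f} dB")
```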

A few hundred million years later, during what is called the Late Heavy Bombardment, the soundtrack surged again as Earth (and everywhere else in the solar system) became a target for a rain of yet more asteroids. But during this time, an odd thing happened: somewhere, a small subset of the churning organic chemicals in the oceans (or deep beneath them, near hydrothermal vents, according to some theories) started to replicate themselves. Amidst all this noise and vibration, life was born. The earliest life-forms for which we have any fossil evidence—the great blue-green algae stromatolite mats, billions of years from anything that could actively listen—were churned about by waves and vibrations in the primordial seas, exposing fresh surface area to sunlight to create the oxygen in the atmosphere we need to breathe. But as these early forms quietly poisoned themselves almost out of existence by oxygenating the atmosphere over the next 2 billion years, they set the stage for their successors, the eukaryotes.

The earliest non-autotrophic eukaryotic life-forms—meaning those that didn’t just sit and photosynthesize—started experimenting with the externalization of proteins similar to those that made up their internal cytoskeleton. These proteins formed self-assembling chains that made small mobile hairs, known as cilia, that let these organisms move around. And in moving around, they found more food. This change transformed life-forms from passive to active players in the nascent ecology, allowing the evolution of the first predators.

Soon a variant on these cilia emerged with a different function. Rather than moving like oarsmen to propel single-celled animals, this new type of cilia, called primary cilia, served to open or close small channels in the cell membrane from which they projected. When the cilia were bent in one direction, they would open the channel; bending them in the other direction would close the channel. This changed them into sensors, detecting motion in the fluids around them. The organisms that first developed these were simple, but they had taken the first real leap in the sensory world. They were the first to sense and use vibration as a means of detecting changes in their environment—not just sensing changes in the fluid flow that might help them move from a food-sparse location to a food-rich one, but actively picking up on motion farther away that might indicate prey. Vibration sensitivity was one of the first telesensory systems, one able to detect changes in the environment at a distance rather than directly on or adjacent to the cell surface.

It would be tempting to say that these cilia are the ancestors of the tiny hairs that detect vibration in our ears today. But evolution is messier than that. Vibration sensitivity is indeed based on the displacement of hair-like cilia, but the evolutionary road to the hair cells that fill the modern vertebrate inner ear does not begin at the earliest flagella that spun single cells around the Archaean seas. Instead it starts with the emergence of mechanosensory neurons in multicellular organisms, probably similar to ancestral jellyfish, about a billion and a half years ago. It was something on the order of another 400–500 million years before what looked like modern sensory hair cells would emerge, and another billion years or so before these hair cells organized into dedicated sensory organs to detect motions in fluid in our early vertebrate ancestors—basic inner ears.

Take a second and think about what an ear is: an organ that senses changes in the pressure of the molecules around it. We tend to imagine ears hearing music or car horns, but what they are really noticing is vibration. Early vertebrates used vibration sensitivity for two different purposes. One was to monitor changes in fluid flow right around their bodies, using what is called a lateral line system, still found in almost all fish and larval amphibians around today. The second was to monitor shifts in internal fluid flow in specialized organs located on each side of their heads. There were no specialized organs for picking up airborne sounds, since back then everyone still lived in the seas; instead, these internal structures were used to detect angular and linear acceleration of the animal's head. These organs, called the semicircular canals and otolith organs, respectively, were internal vibration sensors that measured the acceleration of the animal's head as it moved. Even the earliest vertebrate fossils with inner ears (Sibyrhynchus denisoni, a particularly weird-looking relative of the shark) show these structures. These organs formed the basis of the vestibular system—an acceleration-sensing system tightly synchronized with most other senses and with the musculoskeletal system, letting the animal move in a coordinated fashion and fight against the pull of gravity. But for Sibyrhynchus denisoni and its fellow early vertebrates, it was also the beginning of hearing and listening. The saccule, the otolith organ that sensed the direction of gravity, would also vibrate in response to pressure changes in the water. In other words, the ear had come into existence, and living things began listening.

Hearing in these early vertebrates was probably quite limited compared to many of the examples around today—after all, we've had over 350 million years to play with variations on the theme. Descendants of S. denisoni, contemporary sharks, have relatively high auditory thresholds; that is, a sound has to be pretty loud for them to respond to it. They also have very limited ability to localize sounds underwater. This is endemic to listening under water—the speed of sound in water is about four and a half times its speed in air (roughly 1,500 m/s versus 343 m/s, because water, though denser, is far less compressible), so it is difficult to figure out where a sound is coming from based on differences between the sound at one side of the head and the other. But sharks had other senses to help out—not only vision (which is also relatively limited in sharks), smell, and electrosensation (which lets contemporary sharks pick up on the neuromuscular responses of prey swimming nearby), but their lateral line system as well. All these senses were coordinated to form a sensory world. And right there is the first watershed that separates the world of sound before life developed from what it became after life emerged: the need to map the output of senses measuring vibration, photons, acceleration, and chemicals onto perceptions in the brain.
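To put the underwater localization problem in rough numbers (a sketch of mine, assuming a 20-centimeter spacing between the two ears and round-figure sound speeds): the worst-case arrival-time difference between the two sides of the head, the main directional cue paired ears have to work with, shrinks several-fold when sound moves from air to water.

```python
# Worst-case arrival-time difference between two receivers (ears)
# for a sound coming from directly to one side:
#     delay = receiver spacing / speed of sound.
SPACING_M = 0.2        # assumed ear-to-ear distance; illustrative only
C_AIR_M_S = 343.0      # speed of sound in air at about 20 degrees C
C_SEA_M_S = 1500.0     # rough speed of sound in seawater

for medium, c in (("air", C_AIR_M_S), ("water", C_SEA_M_S)):
    delay_us = SPACING_M / c * 1e6
    print(f"max arrival-time difference in {medium}: {delay_us:6.1f} microseconds")
```

In air that comes to roughly 580 microseconds; in water, barely 130, leaving a listener far less timing information to tell left from right.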

What we think of as sound is split between two factors, physics and psychology. The physics comes into play when trying to describe the parameters of a sound—its frequency (how many times the medium vibrates per second), its amplitude (the difference between the highest and lowest pressure peaks in a given vibration), and its phase (the relative point in time since the vibration began). In theory, if you could completely characterize just these three factors, you could completely describe a sound. That may sound simple enough, but outside of an acoustics laboratory, sound is much more complicated. It’s pretty rare to find an isolated sound generator attached to a calibrated amplifier and speaker dead ahead of you on your daily commute, and if you did, you would be more likely to call a bomb squad than use it as a useful environmental signal to help you cross the street. Acousticians often use these simple, very controlled sounds as the basis for describing what sound does and how it works, but using them to describe sound in the real world is sort of like asking a physicist to describe the motion of a herd of cows. The physicist can model the herd’s behavior perfectly, with the proviso that these are spherical cows moving on a frictionless surface in a vacuum.
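In that idealized laboratory case, the three parameters really do pin the sound down completely. Here is a minimal sketch (my illustration, not the author's; the 440 Hz tone, amplitude, and sample rate are arbitrary choices) of a sound fully described by frequency, amplitude, and phase:

```python
import math

def pure_tone(freq_hz, amplitude, phase_rad, duration_s=0.01, rate_hz=44100):
    """Samples of p(t) = A * sin(2*pi*f*t + phase): a sound that the
    three parameters describe completely."""
    n_samples = int(duration_s * rate_hz)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / rate_hz + phase_rad)
            for i in range(n_samples)]

tone = pure_tone(freq_hz=440.0, amplitude=1.0, phase_rad=0.0)
print(len(tone), "samples; first few:", [round(s, 3) for s in tone[:4]])
```

Any real sound can be treated as a sum of many such components with different frequencies, amplitudes, and phases, which is exactly why the isolated lab tone is the acoustic equivalent of the spherical cow.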

As with the cows, the real world is much messier acoustically, especially in the primordial seas where the first listeners were born. While the physical aspects of sounds are characterized by their frequency, amplitude, and timing or phase, sound in the real world changes radically based on the details of the environment, and the devil is in the details. Sounds in the environment are emitted by almost anything that interacts with anything else, and once emitted, are affected by almost everything in that environment, until the energy lost in those interactions attenuates the sound into background noise. With enough patience, equipment, and computational power (and a large enough budget), you could do a reasonably good job of modeling what would happen to that sound—and in fact this is the basis of a great deal of the recording industry's post-production work. It's also a large portion of what the vertebrate brain does: integrating all the physics delivered by sensory transduction and assembling it into a perceptual model of the world outside, so that it can be acted upon. Behavioral action and its underlying causes are the basis of psychology. Physics goes on whether there is a listener or not—all trees falling in all forests make sounds, regardless of who is present. But once a listener (or viewer or smeller) enters the world, everything changes. The physics of the world meets the psychology of the listener at the level of sensors and neurons, which is why we need a term that bridges the two—psychophysics.
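A toy version of that modeling problem (a sketch assuming invented values for the source level, the background noise, and the absorption rate) makes the "attenuates into background noise" point concrete: combine spreading loss with absorption and ask at what distance the signal finally disappears into the noise floor.

```python
import math

def received_level_db(source_db, r_m, absorption_db_per_m):
    """Toy propagation model: source level referenced to 1 m, minus
    spherical spreading loss, minus a linear absorption term."""
    return source_db - 20 * math.log10(r_m) - absorption_db_per_m * r_m

SOURCE_DB = 140.0      # assumed source level at 1 m; illustrative
NOISE_DB = 60.0        # assumed background-noise level; illustrative
ALPHA = 0.01           # assumed absorption in dB per metre; illustrative

r = 1.0
while received_level_db(SOURCE_DB, r, ALPHA) > NOISE_DB:
    r *= 1.1
print(f"the signal sinks into the background noise near r = {r:,.0f} m")
```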

We think that by observing the world around us, we are actually seeing or hearing or tasting or touching what is going on, but we are not. We are interpreting a representation of the world, created by remapping a form of energy into a usable signal. All sensory input—no matter which form of energy it uses—is remapped. The initial energy of any stimulus, such as sight, smell, or sound, causes some change in the receiver, which is then transduced into a different form and passed along as sensation. Perception is the integration of sensations into a coherent model of the changes in energy that surround us. An atom of perception is the remapping of a single event in the physical world from a single type of sensor.

When you add up all these individual percepts, what you get is the umwelt, the world we build from our senses. For example, color is the psychophysical remapping of the wavelength of light, with brightness a remapping of the amplitude of light (the number of photons received). Touch—whether light pressure or deep pressure—is a remapping of the mechanical distortion of a structure. Smell is the result of the binding of specific chemicals. Sounds are psychophysical remappings of vibratory signals—changes in pressure in some medium such as air, water, dirt, or rock.

By becoming a listener—a receiver of auditory information, rather than just being acted upon by surrounding vibrations, as our stromatolite ancestors were—an organism becomes an active participant in the transfer and detection of acoustic energy. Early organisms developed a need to map the energy of sound via biological transduction, creating nerve impulses that stood for the sounds, sometimes even mimicking them (as in cochlear microphonics), but never with a 1:1 map. Relying on the temporally fuzzy biological machinery of the brain means that the energy of sound has to be remapped onto a perceptual representation by way of sensation.

But psychophysics is a personal remapping of physics, one that changes across evolution and development. At an evolutionary or species level, it's why you can sit on your porch on a summer night and comment on how quiet the country is, while a dozen bats are flitting over you, screaming at subway-train loudness levels while hunting bugs, but at frequencies you can't hear. It's why elephants in urban zoos don't do well when they are located near highways—because the low-frequency rumble of automobile traffic from a kilometer away interferes with their infrasonic communication. At a developmental or individual level, it's why teenagers are drawn to loud music as a stimulant, and why an elderly person with normal age-related hearing loss can seem paranoid as his or her ability to monitor the environment decreases. The birth of psychophysics was the first step in the emergence of mind.
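Rough numbers make the species-level point plain. A small sketch (using approximate textbook hearing ranges of mine, not figures from this book):

```python
# Approximate hearing ranges in Hz; rough textbook figures, species vary.
HEARING_RANGE_HZ = {
    "human":    (20, 20_000),
    "bat":      (1_000, 110_000),   # many echolocating species
    "elephant": (14, 12_000),       # extends well into infrasound
}

def can_hear(species, freq_hz):
    low, high = HEARING_RANGE_HZ[species]
    return low <= freq_hz <= high

for freq in (15, 440, 50_000):
    hearers = [s for s in HEARING_RANGE_HZ if can_hear(s, freq)]
    print(f"{freq:6d} Hz audible to: {', '.join(hearers) or 'none of these'}")
```

The porch, the bats, and the elephants all share the same physics; they differ only in which slice of it their ears remap.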

Shortly after life became complex enough to need psychophysics, it started doing something interesting: it began contributing to the sounds around it. Before life, all sounds on Earth—crashes of waves, susurrations of the wind, crackles of lightning—were noisy in the sense that they provided a constant flow of acoustic energy scattered almost randomly across the spectrum (with the exception of the occasional low moaning of wind across hollow stones or the brief tone of singing sands blowing across dunes). But with the presence of increasingly complex multicellular life and the development of listeners, the sounds of the Earth began to change. Vibration sensitivity arose because of an evolutionary rule of thumb: whenever there is a niche rich in resources, something will emerge to fill it. Early vibration sensitivity served as an early warning system, signaling changes in the water currents around simple organisms. Shifts in these local currents could mean anything from a passing wave to the approach of a predator or the presence of prey organisms nearby. But once organisms grew complex enough to actually listen to their environment, an entire sensory niche lay open for exploitation. Animals could make sounds. This was a leap in animal behavioral complexity. Unlike vision, which relies on passive detection of light energy, usually from sunlight, sound provides a whole new communication channel, one capable of operating in the dark, around corners, and without depending on line of sight. Sound was suddenly not just an early warning system but an active way of coordinating behaviors across long distances, within and between species.
