The Pleasure Instinct: Why We Crave Adventure, Chocolate, Pheromones, and Music

Babies as young as four months old show a stable preference for music containing consonant rather than dissonant intervals (an interval is the pitch relationship between two tones, whether sounded together or in sequence). They also tell two melodies apart more easily if both have a consonant rather than a dissonant interval structure. Consonant intervals are those in which the pitches (the fundamental frequencies) of the constituent tones are related by small integer ratios. For example, intervals such as the “perfect fifth,” with a pitch difference of seven semitones, or the “perfect fourth,” with a pitch difference of five semitones, have the very small integer ratios of 3:2 and 4:3, respectively. Adult listeners from all cultures find these intervals pleasant-sounding, and babies love them. Both adults and four-month-olds prefer these consonant intervals to dissonant intervals such as the tritone, with a pitch difference of six semitones and a large pitch ratio of 45:32. Infants listen contentedly to melodies composed of consonant intervals but show signs of distress when some of the intervals are replaced by dissonant intervals. This effect has been observed in many cultures and in infants with varying levels of music exposure. Hence it appears to result from an innate predisposition toward certain acoustic features that are pleasurable and indeed seem to be shared by most systems of music.
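To make the arithmetic behind these ratios concrete, the short sketch below (my own illustration, not anything from the studies described) converts each interval’s size in semitones into its equal-tempered frequency ratio, 2^(n/12), and places it next to the small-integer ratio cited above; the interval names and pitches in the dictionary are simply the examples from the paragraph.

```python
# A minimal sketch, assuming twelve-tone equal temperament: an interval of
# n semitones corresponds to a frequency ratio of 2 ** (n / 12). The just
# ratios are the small-integer ratios mentioned in the text.
from fractions import Fraction

intervals = {
    "perfect fifth": (7, Fraction(3, 2)),    # 7 semitones ~ 3:2
    "perfect fourth": (5, Fraction(4, 3)),   # 5 semitones ~ 4:3
    "tritone": (6, Fraction(45, 32)),        # 6 semitones ~ 45:32
}

for name, (semitones, just_ratio) in intervals.items():
    tempered = 2 ** (semitones / 12)         # equal-tempered ratio
    print(f"{name}: {semitones} semitones -> "
          f"tempered {tempered:.4f}, just {just_ratio} = {float(just_ratio):.4f}")
```

Running this shows how close the pleasant intervals sit to their simple fractions: the fifth comes out at about 1.498 (nearly 3:2) and the fourth at about 1.335 (nearly 4:3), while the tritone’s 1.414 has no comparably simple fraction, its nearest traditional approximation being the unwieldy 45:32.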
Another interesting feature of auditory processing present in infants is the ability to detect transpositions of diatonic melodies across pitch and tempo. Both infants and adults can recognize a tune based on the diatonic scale as the same tune when it is transposed to a different pitch, but fail to do so when the melody comes from a nondiatonic scale. Since all primates perceive tones separated by exactly one octave as having the same pitch quality, one might predict that the ability to detect transposition of diatonic melodies is also present in our hairier cousins. To date, this experiment has only been performed with rhesus monkeys and, as expected, they exhibit the same effect as human adults and infants. The presence of similar auditory preferences and perceptual abilities among adult listeners and infants from different cultures suggests that certain features that are critical components of music competence exist at birth.
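As a concrete picture of what “transposition” means here (again my own sketch, with an invented five-note tune, not material from the experiments), the snippet below shifts every note of a melody by the same number of semitones and confirms that the pattern of successive intervals, which is plausibly what listeners track, is unchanged.

```python
# A minimal sketch of pitch transposition: shifting every note by the same
# number of semitones leaves the successive intervals -- the melody's
# "shape" -- exactly as they were.

def intervals(melody):
    """Successive pitch differences, in semitones."""
    return [b - a for a, b in zip(melody, melody[1:])]

def transpose(melody, shift):
    """Shift every note up or down by the same number of semitones."""
    return [note + shift for note in melody]

# A diatonic fragment, written as semitones above an arbitrary tonic.
tune = [0, 2, 4, 5, 7]           # do re mi fa sol
shifted = transpose(tune, 5)     # same tune, a perfect fourth higher

assert intervals(tune) == intervals(shifted)   # interval pattern is identical
print(intervals(tune))                          # [2, 2, 1, 2]
```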
Eenie-Meenie-Miney-Mo
That certain auditory biases exist at birth is probably not news to parents. Even those who are unfamiliar with this phenomenon quickly learn that their prelinguistic newborn is a capable communicator. Infants communicate through emotional expression, and parents use this to gauge what their child needs. Few stimuli calm an infant and capture its attention more effectively than a lullaby sung in a soothing voice. As we saw in earlier chapters, infants recognize their mother’s voice from birth and are calmed when they hear it. Experiments have shown that newborns and infants are highly sensitive to the prosodic cues of speech, which tend to convey the emotional tone of the message. Prosody is exaggerated even further in the typical singsong style of motherese that dominates parent-infant dialogue during the first year of life. The infant trains its parents in motherese by responding more positively to certain acoustic features than to others. Motherese and lullabies have so many acoustic properties in common—such as simple pitch contours, broad pitch range, and syllable repetition—that theorists have argued they belong to the same musical genre.
Just as motherese shows up with the same acoustic properties in virtually every culture, so too does the lullaby. Practically everyone agrees on what is and is not a lullaby. Naive listeners can distinguish foreign lullabies from nonlullabies that stem from the same culture of origin and use the same tempo. Infants, of course, make the distinction quite readily. Even neonates prefer the lullaby rendition of a song to a nonlullaby rendition performed by the same singer. Although it is tempting to attribute such preferences to experience, studies have shown that hearing infants raised by deaf parents who communicate only by sign language show comparable biases. It appears, then, that from our very first breath, we carry a set of inborn predispositions that make us seek out specific auditory stimuli. These stimuli are common across cultures and appear in many forms of music but are exemplified in the lullaby. Why should this be the case? One might argue that these acoustic features help foster mother-infant communication, but this just passes the question along without really answering it. Why do these specific acoustic properties show up in motherese and the lullaby? They arise because the infant trains his or her parents to provide these stimuli through feedback in the form of emotional expressions of approval and calm. The real question is why these types of auditory experiences pacify and bring pleasure to infants (and adults).
 
 
In April 2003, scientists from the University of California at San Francisco discovered that newborn rats fail to develop a normal auditory cortex when reared in an environment that consists of continuous white noise. The hallmark of white noise is that it has no structured sound—every sound wave frequency is represented equally. Neurobiologist Michael Merzenich and his student Edward Chang wanted to understand how the environmental noise that we experience every day influences the development of hearing disorders in children. They speculated that perhaps the increase in noise in urban centers over the past several decades might be responsible for the concomitant increase in language impairment and auditory developmental disorders observed in children over the same period.
Their experiment began by raising rats in an environment of continuous white noise that was loud enough to mask any other sound sources, but not loud enough to produce any peripheral damage to the rats’ ears or auditory nerves. After several months, the scientists tested how well the auditory cortex of the rats responded to a variety of sounds. They found significant structural and physiological abnormalities in the auditory cortex of the noise-reared rats when compared to rats raised in a normal acoustic environment. Interestingly, the abnormalities persisted long after the experiment ended, but when the noise-reared rats were later exposed to repetitious and highly structured sounds—such as music—their auditory cortex rewired and they regained most of the anatomical and physiological markers that were observed in normal rats.
This finding created a wave of excitement throughout the scientific community because it clearly showed the importance of experience in influencing normal brain development. The developing auditory cortex of all mammals is an experience-expectant organ, requiring specific acoustic experiences to ensure that it is wired properly. As Chang summarized, “It’s like the brain is waiting for some clearly patterned sounds in order to continue its development. And when it finally gets them, it is heavily influenced by them, even when the animal is physically older.”
 
 
The auditory cortex of rats and humans—indeed, all mammals—progresses through a very specific set of timed developmental changes. As we have seen in the other sensory systems, this development depends on genes to program the overall structure, but requires the organism to experience environmentally relevant stimuli at specific times to fine-tune the system and trigger the continued developmental progression. Genes don’t just magically turn on. In most cases they wait for an internal or environmental promoter to trigger their expression. And the details of development are not in the genes but rather in the patterns of gene expression.
The primate auditory system develops a bit differently from the sensory systems of touch, smell, and taste that we have considered thus far. The peripheral anatomical structures of the auditory system begin to form very early in development, yet the system as a whole matures rather slowly. For instance, by the time Kai had been in Melissa’s womb for about four weeks, he already had the beginnings of ears on either side of his embryonic head. Cells were also forming in what would become his cochleae, the shell-shaped organs in each ear that transduce acoustic sound waves into the electrical impulses the brain uses to communicate. By about the twenty-fifth week of gestation, Kai had in place most of the auditory brain-stem nuclei that would be used to process features of acoustic information such as sound localization and pitch discrimination. But these cells depend on stimulation for continued growth and maturation and for forming synaptic connections with their higher cortical target sites.
It is probably no surprise to readers by now that it is precisely at this time—when the brain most needs auditory stimulation—that fetuses begin to hear their first sounds. We know this for two reasons. First, it is at this age that fetuses first show signs of what is called an auditory-evoked potential. Preterm babies are given a battery of tests. Among these is a painless test that involves placing a small headphone over their ears and attaching three electrodes to their scalp to measure their brain’s response to auditory stimuli. When a brief clicking noise is played, preterm babies younger than about twenty-seven weeks show little or no electrical response following the stimulus—their brain is not mature enough to register the sound, and they show no sign of hearing it. It’s not until after twenty-seven weeks or so that preterm infants show the first evidence of a brain response to auditory stimuli, and not so coincidentally, the first signs of actually hearing sounds.
These results are consistent with observations using ultrasound technology to monitor fetal movements in response to tones played near the mother’s stomach. At Kai’s sixteen-week ultrasound, he showed no response to auditory stimulation in the form of tones played near Melissa’s stomach, or to either of our voices. The story had changed by his thirty-week ultrasound. Not only did he appear less embryonic, he also altered his movements whenever we made a loud noise. The most reliable change was a complete halt of his ongoing movement when his mother spoke. My paternal observations are consistent with real experiments showing that fetuses start and stop moving in response to auditory stimuli, and even blink their eyes in reaction to loud sounds heard in the womb.
Throughout the last trimester, Kai’s brain was taking in sounds and using them to stabilize and fine-tune his developing auditory system. Although many sounds can pass into the womb, he was especially sensitive to those with dramatic pitch contours. This is because even fetal brains show adaptation to unchanging stimuli. A tone that is repeatedly played at the same pitch and amplitude elicits a full response at first but becomes less and less interesting over time. This is mirrored by physiological responses measured from the brain, such as auditory-evoked potentials. Evoked potentials become smaller and smaller in preterm babies if the same old boring stimulus is played over and over again. The brain simply begins to habituate, and the stimulus becomes less salient.
Continuous and slowly changing sounds—those that exhibit exaggerated pitch contour and wide pitch variation (exactly like those heard in motherese and in lullabies)—keep the baby and its brain in an attentive state. Fetuses show far less behavioral habituation to music that sounds like motherese than to repetitive tones of the exact same pitch. Likewise, preterm infants older than thirty weeks do not exhibit a decline in their auditory-evoked potential if they are stimulated with sounds that change slightly in pitch rather than stay the same. The sounds of motherese and lullabies are built from acoustic features that are the perfect forms of stimulation to ensure that a fetus’s experience-expectant brain will continue to develop the normal auditory circuitry and perceptual skills that will help it survive after birth.
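One way to picture the contrast between a repeated tone and a slowly changing one is with a toy habituation model. The sketch below is purely illustrative (the decay rate and the pitches are invented, and it is not a model from the research described), but it reproduces the qualitative pattern above: simulated responses to an unchanging tone shrink with each repetition, while responses to tones that keep changing in pitch stay at full strength.

```python
# A toy habituation model (illustrative assumption, not data from the studies):
# the simulated response to each tone shrinks when the pitch repeats the
# previous one and recovers fully when the pitch changes.

def simulate_responses(pitches, decay=0.7):
    """Return a simulated response strength for each tone in `pitches`."""
    responses, strength, last = [], 1.0, None
    for pitch in pitches:
        if pitch == last:
            strength *= decay      # same old stimulus: response habituates
        else:
            strength = 1.0         # novel pitch: response recovers
        responses.append(round(strength, 2))
        last = pitch
    return responses

print(simulate_responses([440] * 6))                       # [1.0, 0.7, 0.49, 0.34, 0.24, 0.17]
print(simulate_responses([440, 494, 523, 440, 494, 523]))  # stays at 1.0 throughout
```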
 
 
The auditory system is not the only part of the brain that benefits from sound stimulation. Research has shown that fetuses older than thirty weeks can distinguish different phonemes such as ba versus bi, suggesting that prenatal experience may be critical to the development of language areas. There is also evidence that auditory stimulation in the womb promotes the development of limbic structures such as the hippocampus and the amygdala, which support memory and emotional development. Indeed, it is now clear that the sounds a fetus hears in its third trimester can be remembered years later and even influence behavior as late as two years after birth. One researcher, for example, found that infants whose mothers watched a particular soap opera during pregnancy were calmed when they heard the show’s theme song, whereas babies whose mothers did not watch the show had no reaction to the song.
Now that Kai is finally born, he is awash in a sea of acoustic information, but not all of these sounds are novel. He is certainly familiar with his mother’s voice and, to a lesser extent, my own. Many of the sounds that Melissa experienced in her final trimester were likely heard by Kai, and although most were not repeated enough to consolidate into long-term memories, they undoubtedly had a significant impact on his auditory development thus far. Kai, like all primates, will continue to need auditory stimulation for decades to come. Normal development of auditory circuitry continues well into the late teens, resulting in steady improvement in many functions such as pitch discrimination and sound localization. Mammals that are denied this stimulation suffer from a range of abnormalities. For example, rats that are raised in an acoustic environment with a restricted frequency range are unable to hear outside this range as adults. This impairs their ability to discriminate sounds whose pitch variation overlaps with this frequency range. Deprivation also disrupts their ability to localize sounds—an impairment that could prove costly if they are approached by a predator.
The fact that all primates have auditory perceptual skills that are facilitated by diatonic scale structure, a trait not shared by all mammals, gives us a rough idea of when our faculty for music may have emerged in our evolutionary lineage. Some Old World primates may have evolved auditory circuitry with improved function relative to competing primates—such as finer pitch discrimination and better sound localization—that gave them a distinct survival advantage. As we’ve seen with modern experimental studies, the successful development of this circuitry depended on the organism experiencing certain forms of auditory stimulation. Clearly, not all primate species have satisfied this demand in the same way. In hominids, natural selection forged this adaptation by linking these optimal forms of auditory stimulation to the activation of evolutionarily ancient pleasure circuits that are seen in all mammals. These circuits were most likely an earlier adaptation that fostered reproduction. Natural selection produces incremental change in structure and function that is always built on top of earlier adaptations. Structures are co-opted from others not in a design sense, but through a process that unevenly results in the survival of some genes over others. Hominids’ new fondness for wide swings in pitch and loudness, in combination with exaggerated intonation, may have created the initial conditions that ultimately led to the evolution of musicality and motherese in our species. These human technologies, in turn, became very effective tools in promoting brain development. In the next chapter, we will find that a similar story has occurred for vision.
