I Can Hear You Whisper

by Lydia Denworth


Intellectually, I was interested in true American Sign Language and appreciated that ASL was a fully fledged language that could open a door to a whole new world. Over the years, my appreciation has only grown. But in my early foray into the deaf world, two things gave me pause. As I read, I discovered an aspect of the deaf community that no one is proud of: a long, alarming history of educational underachievement. Even after nearly two hundred years of concentrated effort at educating the deaf in America, the results are indisputably poor. The mean reading level of deaf adults is third or fourth grade. Between the ages of eight and eighteen, deaf and hard-of-hearing students tend to gain only one and a half years of literacy skills. Education and employment statistics are improving, but deaf and hard-of-hearing students remain more likely to drop out of high school than hearing students and less likely to graduate from college. Their earning capacity is, on average, well below that of their hearing peers. Why? Was it the fault of deaf education or of deafness itself? I did know that reading in English meant knowing English.

Equally disturbing was the depth of the divide I perceived between the different factions in the deaf and hard-of-hearing community, which mostly split over spoken versus visual language. Although there seemed to be a history of disagreement, the harshest words and most bitter battles had come in the 1990s with the advent of the cochlear implant. The device sounded momentous and amazing to me, and that was a common reaction for a hearing person. It's human nature to gravitate to ideas that support one's view of the world, and hearing people have a hard time imagining that the deaf wouldn't rather hear. As Steve Parton, the father of one of the first children implanted in the United States, once put it, the fact that technology had been invented that could help them do just that seemed “a miracle of biblical proportions.”

By the time I was thinking about this, early in 2005, the worst of the enmity had cooled. Nonetheless, clicking around the Internet and reading books and articles, I felt as if I'd entered a city under ceasefire, where the inhabitants had put down their weapons but the unease was still palpable.
A few years earlier, the National Association of the Deaf, for instance, had adjusted its official position on cochlear implants to very qualified support of the device as one choice among many. It wasn't hard, however, to find the earlier version, in which they “deplored” the decision of hearing parents to implant their children. In other reports about the controversy, I found cochlear implantation of children described as “genocide” and “child abuse.”

No doubt those quotes had made it into the press coverage precisely because they were extreme and, therefore, attention-getting. But child abuse?! Me? What charged waters were we wading into? I just wanted to help my son. It would be some time before I could fathom what lay behind the objections. Like a mother adopting a child from another race, I realized my son might have a cultural identity—should he choose to embrace it—that I could come to appreciate but could never truly share. Yet I felt strongly that our family had a claim on him, too. First and foremost, he belonged to our culture.

We might have to take sides.

3

HOW LOUD IS A WHISPER?

Alex stood with his nose pressed against the arched plate-glass window watching the traffic go by on Second Avenue. Mark and I stood with him, pointing out the yellow taxis, the size of the trucks, the noise of the horns. Talking to your child is a hard habit to break, even if you know you can't be heard. On this day, we were hoping for clarity. Instead of doing another hearing test in the doctor's office, we'd been sent to the New York Eye and Ear Infirmary, where there is a battalion of audiologists on hand at all times. The door into the waiting room opened and a young woman with a broad, friendly face and long brown hair stood on the threshold. She glanced at the file in her hand and then looked up.

“Alexander? Alexander Justh?”

From the moment she called his name that first time early in February, Jessica O'Gara told me later, she was evaluating Alex. “It starts in the waiting room.” Like detectives, audiologists use every possible clue to figure out what's going on with a child. Is he engaged with his parents? Did he turn his head when his name was called? What toy is he playing with? Is he holding a book upside down? Can he turn the pages? Could there be a cognitive delay? Can this child hear?

Jessica and her colleague Tracey Vytlacil ushered us down the hall. Alex was shy and, of course, quiet, but Jessica's energy was infectious.

“Hey, buddy, we need to figure out what's going on with you,” she said, “but first I think I have something you will like.”

She pulled out a box of stickers, the mainstay of medical offices everywhere, and let Alex pick out a handful. While we went through his history, he occupied himself putting stickers on his shirt and arm and then on my shirt and arm. Like every audiologist and doctor we would see in the years to come, Jessica had a plastic model of the ear on display and some wall charts to boot. Later, these cross-sectioned models became one of Alex's favorite playthings during waits for appointments.

 • • • 

Unlike the eyeball, which is bigger inside the skull than what we see in the face, the many parts of the ear get smaller and more intricate under the surface. In the plastic models, the outer ear, the part that's visible, reminds me of Dumbo's enormous ear compared to the tangle of circles and snaking tubes that make up the middle and inner ears. In her marvelous A Natural History of the Senses, Diane Ackerman likened the inner workings of the ear to a “maniacal miniature golf course, with curlicues, branches, roundabouts, relays, levers, hydraulics, and feedback loops.” The design may not be streamlined, but it is effective, transforming sound waves into electrical signals the brain can understand. It is also particularly well suited to the human voice; our keenest hearing is usually in the range required to hear speech. That makes evolutionary sense. Prehistoric ears had to contend primarily with human and animal noise and the occasional clap of thunder. Modern noise dates only to the Industrial Revolution and the invention of gunpowder.

When I call Alex's name, I'm pushing air out of my throat and making air molecules vibrate. They bump into the air molecules next to them—how fast and how hard depends on whether I've whispered, yelled, or spoken in a conversational voice. In any case, I've created a sound wave, a form of energy that can move through air, water, metal, or wood, carrying detailed information.

Traveling through the air, the sound waves are acoustic energy. The outer ear is designed to catch that energy in the folds of the pinna, the visible part of the ear. It does a slightly better job with sounds coming from the front. Cats and deer and some other animals have the ability to turn their pinnae toward a sound like radar dishes, but humans must turn their heads. Once collected, the waves are funneled into the ear canal, which acts as a resonance chamber.

At the eardrum (tympanum), the waves have reached the middle ear. When they hit the eardrum, it vibrates. Those vibrations in the membrane of the eardrum are carried across the little pocket of the middle ear by a set of tiny bones—the smallest in the body—called the ear ossicles, but more commonly known as the hammer (malleus), anvil (incus), and stirrup (stapes). Converting the original acoustic energy to mechanical energy, the hammer hits the anvil, the anvil hits the stirrup, and the stirrup, piston-like, hits a membrane-covered opening called the oval window, which marks a new boundary.

On the far side lies the fluid-filled cochlea, the nautilus-like heart of the inner ear. The vibrations transmitted from the stapes through the oval window send pressure waves through the cochlear fluid; mechanical energy has become hydraulic energy. Outside, the cochlea is protected by hard, bony walls. Inside, the basilar membrane runs along its length like a ribbon. Thin as cellophane, the basilar membrane is stiff and narrow at one end, broad and flexible at the other. As sound waves wash through, the basilar membrane acts as a frequency analyzer. Higher-pitched sounds, like hissing, excite the stretch of membrane closest to the oval window; lower pitches, like rumbling, stimulate the farther reaches. Like inhabitants of a long curving residential street, specific sounds always come home to the same location, a particular 1.3 millimeters of membrane and the thirteen hundred neurons that live there, representing a “critical band” of frequencies.

Sitting on top of the basilar membrane is the romantically named organ of Corti. Known as the seat of hearing, it holds thousands of hair cells. Quite recently, scientists discovered a distinction between the functions of inner and outer hair cells. Twelve thousand outer hair cells, organized in three neat rows, amplify weak sounds and sharpen up tuning. Another four thousand inner hair cells, in one row, take on the work of sending signals to the auditory nerve fibers. Like microscopic glow sticks that light up when you snap them, the tiny stereocilia on each hair cell bend under the pressure of the movement of fluid caused by the sound wave and trigger an electrical impulse that travels up the nerve to the brain.

 • • • 

The most obvious way to assess hearing is to test how loud a sound has to be to be audible—its threshold. Decibels, a logarithmic scale that compares sound intensity levels, were invented at Bell Laboratories, the source of most things sound-related into the mid-twentieth century. Named for Alexander Graham Bell, decibels (dB) provide a means of measuring sound relative to human hearing. Zero decibels doesn't mean that no sound is occurring, only that most people can't hear it. With normal hearing, a person can distinguish everything from the rustling of leaves in a slight breeze (ten decibels) to a jet engine taking off (130 dB). The leaves will be barely noticeable, the airplane intolerable and damage-inducing. Urban street noise registers about eighty decibels; a bedroom at night is closer to thirty. A baby crying can reach a surprising 110 dB, a harmful level with prolonged exposure. (Thank goodness they grow up.) A whisper hovers around thirty decibels.
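A minimal sketch of the arithmetic behind that logarithmic scale, assuming the standard reference intensity of 10^-12 watts per square meter (a textbook value, not a figure from this chapter):

```python
import math

def intensity_to_db(intensity_w_per_m2, reference=1e-12):
    # Decibels compare a sound's intensity to the faintest sound a
    # typical ear can detect (about 1e-12 W/m^2).
    return 10 * math.log10(intensity_w_per_m2 / reference)

# Every 10 dB step is a tenfold jump in intensity, so the spread from
# rustling leaves to a jet engine covers a trillionfold range of energy.
print(round(intensity_to_db(1e-11)))  # 10 dB: leaves in a slight breeze
print(round(intensity_to_db(1e-9)))   # 30 dB: a whisper, or a bedroom at night
print(round(intensity_to_db(1e-4)))   # 80 dB: urban street noise
print(round(intensity_to_db(10)))     # 130 dB: a jet engine taking off
```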

The number of cycles per second in each sound wave, whether it rises and falls in tight, narrow bands or loose, languid swells, determines its frequency, which we hear as pitch and measure in hertz (Hz). The range of normal hearing in humans is officially about 20–20,000 Hz. Just how astonishing that range is becomes clearer when you compare it to what we can do with our vision. The visible light spectrum encompasses one octave (a doubling of frequency, so violet light has roughly two times the frequency of red), resulting in 128 noticeably different shades of color (these are literally measured in a unit called “just noticeable difference” or JND), although in fact there are far more variations than the eye can see. Hearing, on the other hand, encompasses nearly ten octaves with five thousand just noticeable differences. What sounds too good to be true usually is. Only small children really hear that much. Hearing sensitivity declines with age, and what most adults hear spans about half the possible range, 50–10,000 Hz.
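The octave counts are easy to check with a base-2 logarithm; a quick sketch using the ranges quoted above (the terahertz figures for visible light are rough textbook values, not the chapter's):

```python
import math

def octaves(low_hz, high_hz):
    # An octave is a doubling of frequency, so the count is a base-2 log.
    return math.log2(high_hz / low_hz)

print(octaves(20, 20_000))   # ~9.97 octaves: the full range of young ears
print(octaves(50, 10_000))   # ~7.64 octaves: a typical adult's range
print(octaves(400, 800))     # ~1 octave: roughly red to violet light,
                             # expressed here in terahertz
```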

 • • • 

Somewhere along the way, some part of the sequence of hearing was not working for Alex. Jessica and Tracey were trying to figure out what. Having two audiologists was going to make a difference in the reliability of the results, because testing children takes as much art as science. Kids get cranky and tired. They don't pay attention. If they're very young, they can't tell you what they hear. The solution is for one audiologist to run the test and the other to play with the child. Both watch like hawks for signs of response and compare notes. Although there are more direct physiological tests of hearing, behavioral testing is still essential because it's a measure of the results of the entire system. It's like the difference between seeing if a dishwasher turns on and seeing if the dishes come out clean.

A quick immittance test—a puff of air through the canal—showed that Alex's right ear was still full of fluid. No air, and therefore no sound, was flowing through. His left ear, however, was not fully blocked. By this point, the fluid alone was not enough of an explanation. “Fluid doesn't make them not talk,” Jessica explained. “They might be slower or like louder noises,” but children won't fail to talk entirely.

We moved on to testing in the booth. With Tracey running the audiometer and Jessica enticing Alex with stuffed caterpillars eating toy fruit and toppling block towers, they worked their way through loud and soft sounds of every frequency. Alex had to feed the caterpillar or add a block every time he heard a sound.

Watching the test was an anxious experience, like watching a child readying to catch a pop fly when the game hangs in the balance, or willing her to perform a difficult piece at a crowded recital. Only more so. You observe and your nerves jangle.

As Karen had explained to me, hearing tests create an audiogram, a graphic representation of what a person can hear, by drawing a line (two, actually, one for each ear) along the threshold of the softest detectable sound. The x-axis charts low to high frequency from left to right. Decibels run down the y-axis, getting louder from top to bottom. (Some audiograms reverse this and put the louder sounds at the top.) There's a complication, though. Sound is deceptive. When we listen, we tend to think we hear just one pitch, but there's more going on than that. Daniel Levitin, in This Is Your Brain on Music, uses the analogy of Earth spinning on its axis, traveling around the sun, and moving along with the entire galaxy—all at the same time. When we call a note on the piano middle C, we have identified the lowest recognizable frequency, known as the fundamental frequency, but there are mathematically related frequencies called harmonics or partials sounding above that at the same time. Our brains respond to the group as a whole, and the note sounds coherent to us. Audiologists must get rid of the harmonics and partials to be sure a person is truly hearing at a particular frequency. They do that by creating either pure tones (one frequency) or narrow-band tones, which do exactly what they say—excite only a narrow band of the basilar membrane.
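To make the harmonics concrete, here is a small sketch built on middle C's conventional fundamental of about 261.6 Hz (a standard tuning figure; the chapter doesn't give the number):

```python
# A single piano note is really a stack of frequencies. We name the note
# by its fundamental; the harmonics are whole-number multiples of it.
fundamental_hz = 261.6  # middle C, in standard tuning
harmonics = [round(n * fundamental_hz, 1) for n in range(1, 6)]
print(harmonics)  # [261.6, 523.2, 784.8, 1046.4, 1308.0]
# A pure tone used in an audiogram, by contrast, contains only one of these.
```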

It's not just sounds like leaves and airplanes that vary in frequency. The words we say, even the different vowels and consonants in each word, consist of waves of different frequencies, generally between 500 and 3,000 Hz. The “t” sounds in “tugboat,” for instance, contain more high-frequency energy than the “b” and the “g,” for which most of the energy is concentrated in lower frequencies. In a normal hearing ear, those frequencies correspond to particular points along the basilar membrane from low to high in a system that works like a piano keyboard. When low-frequency sounds excite hair cells at the far end of the membrane, the brain gets the message to recognize not just a jazz riff played in the deep tones of a stand-up bass but also the sound “mm.” The sound of the letter “f,” on the other hand, has the same effect as the top notes on the piano: It stimulates a spot at the end of the membrane nearest the oval window, where the high frequencies are found.

A portion of the top half of the audiogram is known as the speech banana. It's an inverted arc, roughly what you'd get if you placed a banana on its back on the sixty-decibel line, and is typically shown as a shaded crescent. To be able to hear normal speech, a person needs to be able to hear at the frequencies and decibels covered by the banana. An average person engaged in conversation, about four feet away, will have an overall level of sixty decibels. The level falls by six decibels for every doubling of the distance, and it rises by six decibels for every halving of the distance, which is why it's harder to hear someone who is farther away.
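The six-decibel rule is simple enough to work out in a few lines; a sketch, assuming the sixty-decibel, four-foot conversation described above:

```python
import math

def speech_level_db(distance_ft, level_db=60.0, reference_ft=4.0):
    # Ordinary conversation runs about 60 dB at four feet; the level drops
    # about 6 dB for each doubling of distance and rises 6 dB for each halving.
    return level_db - 6.0 * math.log2(distance_ft / reference_ft)

print(speech_level_db(4))    # 60.0 dB: across the kitchen table
print(speech_level_db(8))    # 54.0 dB: from across the room
print(speech_level_db(16))   # 48.0 dB: calling from the doorway
print(speech_level_db(2))    # 66.0 dB: leaning in close
```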

By the end of that day, we knew Alex had an underlying hearing loss, but there was still the complication of the fluid. Over the next two weeks, in quick succession, Alex had tubes put in surgically by our ear, nose, and throat specialist, Dr. Jay Dolitsky, to clear remaining fluid. (“It was like jelly,” he told us.) Alex had a bone conduction test, which I now understood measures what you hear through your bones. Every time you hum or click your teeth, you hear the resulting sound almost entirely through your bones. When you speak or sing, you hear yourself in two ways: through air conduction and bone conduction. The recorded sound of your voice sounds unnatural to you because only airborne sound is picked up by the microphone and you are used to hearing both. Finally, Alex had an auditory brain stem response (ABR) test, under sedation, which allowed Jessica to measure his brain's responses to a range of frequencies and intensities and pinpoint his level of loss.

When it was all over, we knew that, in medical terms, Alex had moderate to profound sensorineural hearing loss in both ears. That probably meant that his hair cells were damaged or nonexistent and not sending enough information to the auditory nerve. In someone who is profoundly deaf, who can hear only sounds louder than ninety decibels, almost no sound gets through. In a moderate (40 to 70 dB) or severe (70 to 90 dB) hearing loss, that all-important basilar membrane still functions but not nearly as well. Like a blurry focus on a camera, it can no longer tune frequencies as sharply. The line on Alex's audiogram started out fairly flat in the middle of the chart and then sloped down from left to right like the sand dropping out from under your feet as you wade into the ocean. He could hear at fifty decibels (a flowing stream) in the lower frequencies, but his hearing was worse in the high frequencies, dropping down to ninety decibels. He could make out some conversation, but my whispered “I love yous,” at thirty decibels or less, would have been inaudible.
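For reference, the clinical labels fall into bands of decibels; a rough sketch using the ranges given in this chapter (the cutoffs for "normal" and "mild" are common clinical conventions, not figures quoted here):

```python
def degree_of_loss(threshold_db):
    # Classify the softest sound a person can hear at a given frequency.
    if threshold_db <= 25:
        return "normal"    # common clinical cutoff, not from the chapter
    if threshold_db < 40:
        return "mild"      # likewise a conventional boundary
    if threshold_db < 70:
        return "moderate"  # 40 to 70 dB, as described above
    if threshold_db < 90:
        return "severe"    # 70 to 90 dB
    return "profound"      # only sounds louder than 90 dB get through

# Alex's thresholds ran from about 50 dB in the low frequencies
# down to about 90 dB in the highs: moderate to profound.
print(degree_of_loss(50), degree_of_loss(90))  # moderate profound
```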
