Washington State Magazine

Spring 2003


In This Issue...


Philip & Neva Abelson: Pioneers on the knowledge frontier :: Philip Abelson '33 developed the process, adopted by the Manhattan Project, for separating U-235 from U-238. He went on to make significant contributions to biochemistry, chemistry, engineering physics, and other fields. Neva Abelson '34 developed the test for the Rh factor in newborns. What was once Science Hall now carries their name. by Pat Caraher

Between humor and menace: The art of Gaylen Hansen :: Gaylen Hansen paints his alter ego as he confronts giant grasshoppers and a buffalo lurking behind the bed. by Sheri Boggs

Resilient Cultures—A new understanding of the New World :: The history of European and Indian interactions is being dramatically rewritten. In a new book, a WSU historian produces an update. by John Kicza

Whirlwind tour :: On an August morning, Senator Murray '72 visits Dayton to hear its concerns. by Treva Lind

Homage to a difficult land: An African scientist returns home :: Beset by a relentless drought, the Sahel seems in unstoppable ecological decline. But Oumar Badini will not give up. There must be some way to help Mali farmers reclaim the land. Story and photos by Peter Chilson

Field Notes

Halloween in Iraq :: A traveler explores rumors of genuine "evildoers." by Nathan Mauger




Cover: A young fan gets his autograph from quarterback Jason Gesser. Photo by Shelly Hanks.

Christine Portfors. For more about her research, and about bats, check her Web site, www.vancouver.wsu.edu/fac/portfors/portfors_home.html. Mark Schriver


How do we perceive sound?

© Washington State University

Christine Portfors, a neuroscientist, tends a lair of 23 tropical moustache bats at WSU Vancouver in order to tease apart how they distinguish between sounds, for example, between those they use for echolocation and those they use to communicate.

Bat communication sounds, like speech sounds, are very complex in terms of frequency and timing, says Portfors. Beyond that, "We don't know anything about how the brain actually processes those types of sounds."

Earlier work by Portfors revealed that bats have neurons that are exquisitely sensitive to the timing of the echolocation sound: the interval between when they emit it and when the echo comes back. Firing at different delays, these neurons create a neural map of target distance. Other neurons are so sensitive that bats can pick out a particular species of moth based on the amplitude modulation of the echolocation signal.
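The delay-to-distance relationship those neurons encode is simple physics. The sketch below is illustrative only, not a model of the bat's neural circuitry; the speed of sound and the example delay are assumed values.

```python
# Sketch: converting an echolocation delay to a target distance.
SPEED_OF_SOUND = 343.0  # meters per second in air, approximate

def target_distance(echo_delay_s):
    """Sound travels out to the target and back, so the
    distance is half the total round-trip path."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

# A 10-millisecond delay corresponds to a target about 1.7 m away.
print(round(target_distance(0.010), 3))  # 1.715
```

A population of neurons, each tuned to a different delay, thus amounts to a map of distances, which is the "mental map" described above.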

Bat talk. 

Portfors is currently focusing on the sounds bats use to communicate with each other. How their brains process communication sounds is apparently very similar to how humans process speech. Neural strategies seem to follow a common pattern among mammals.

Portfors is conducting experiments to determine what these communication sounds actually mean. How, for example, does a mother bat distinguish between her pup's call and that of another?

Our understanding of how the auditory system does this is poor, says Portfors.

This current focus reflects Portfors's interest in behavior, an unusual inclination for a neuroscientist. The ultimate question piquing her curiosity, however, is neurological.

When you hear a sound, its frequencies are processed in your ear by the cochlea, the spiral-shaped cavity in the inner ear that contains the nerve endings necessary for hearing. There, the sound is split into its different frequency components. Like a piano, says Portfors: high frequencies at one end, low at the other.

Conventional scientific wisdom has it that the individual frequency components stay within this sequential process, running individually through the auditory system. An initial neuron that responds to a high frequency will project the signal to another neuron higher up the auditory system that responds to the same frequency. But at what point, asks Portfors, does the brain put the signals together? At what point, and how, does that complex mixture of frequency modulation and timing become a sound in the brain?
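The splitting of a sound into frequency components can be illustrated with a Fourier transform. This is a minimal sketch of the idea, not the cochlea's actual mechanism or Portfors's analysis; the tone frequencies and sample rate are arbitrary choices.

```python
import numpy as np

# Sketch: decompose a sound into its frequency components,
# as the cochlea does tonotopically (high at one end, low at the other).
sample_rate = 8000  # samples per second (assumed)
t = np.arange(0, 1.0, 1.0 / sample_rate)

# A toy "call": a 440 Hz tone mixed with a weaker 1200 Hz tone.
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

# The FFT splits the mixture back into its components.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

# The two strongest components recover the original tones.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))  # [440.0, 1200.0]
```

The question the article raises is what happens next: once the components are separated, where and how does the brain recombine them into a single perceived sound?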

Portfors has shown that this integration occurs at a lower evolutionary level of the brain than previously thought: not at a very high level of the cortex, as was long believed, but somewhere in the more primitive midbrain.

Recognizing a voice.

Besides filling some big gaps in our knowledge about how the auditory system works and suggesting some very tantalizing evolutionary implications, Portfors's work also has practical applications. She is part of a scientific advisory board for a company that is developing software for voice identification.

"Basically, we're modeling what we know about the auditory system," she says.

Her work on this project, which is directly related to her basic research, concerns how we group the different components of sounds together. Even the best computerized voice recognition systems struggle to interpret a single voice, let alone more than one. A human voice is unique: it is composed of a number of components working together, and while it may contain components identical to those of another voice, the combination makes it distinct. Software that could isolate and analyze these components would greatly improve voice recognition systems. By drawing on the work of research scientists, says Portfors, the company she works with is trying to reverse-engineer the brain.
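The "combination makes it distinct" idea can be sketched in a few lines. This is a toy illustration under assumed numbers, not the company's software or Portfors's model: two voices share the very same frequency components, but their amplitude patterns differ, and a simple distance measure on those patterns tells them apart.

```python
import numpy as np

# Shared frequency components (Hz) present in both voices (assumed values).
components = np.array([200.0, 400.0, 600.0])

voice_a = np.array([1.0, 0.5, 0.25])  # amplitude pattern for voice A
voice_b = np.array([0.25, 1.0, 0.5])  # same components, different mix

def signature_distance(a, b):
    """Euclidean distance between two amplitude patterns:
    zero means an identical mix, larger means more distinct."""
    return float(np.linalg.norm(a - b))

print(signature_distance(voice_a, voice_a))  # 0.0: same voice
print(signature_distance(voice_a, voice_b))  # > 0: distinguishable
```

A real system would extract such patterns from recorded speech rather than take them as given, but the principle is the same: identity lives in the combination, not in any single component.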

Categories: Biological sciences | Tags: Bats, Sound, Neuroscience
