
February 14, 2005

Researchers Analyzing How Human Brain Processes Sound

Deep inside your brain, near the brain stem in an area called the inferior colliculus, cells respond to sounds that you hear, processing the signals and letting you know what you are hearing.

The anatomy of the area has been well studied. Now UConn researchers, led by biomedical engineer Monty A. Escabí, are conducting basic research to relate the anatomy of the inferior colliculus to its function.

Escabí, an assistant professor of electrical and computer engineering, and Heather Read, an assistant research professor in the Department of Psychology, recently won a $1.3 million grant from the National Institutes of Health to learn more about how the brain processes and interprets “natural” – everyday – sound, with its complex mix of important messages and background noise.

“We’re trying to figure out how the brain works – the parts of the brain related to hearing,” says Escabí.

The researchers are examining how the brain processes sound signals during normal listening conditions, and how brain activity is affected if there is damage to the central nervous system.

“Speech is our number one form of communicating – it’s so important for humans,” Escabí says, “so understanding how we perceive it is important.”

The project carries into the realm of basic research a clinical study on which the two have been collaborating, also funded by the NIH. That research, led by Read, uses an animal model to learn how disabilities such as dyslexia are related to hearing.

Post-mortem studies on humans done at Harvard Medical School in the 1980s found that people with dyslexia had lesions on their temporal lobes, the areas of the brain above the ears. Escabí and Read are collaborating with researchers at Harvard to look at developmental abnormalities that might be related to dyslexia.

By studying the primary auditory cortex of rats with lesions in the temporal lobe, and using normal rats as a control, Read and Escabí are trying to model how the brain responds to lesions. While it’s too early to draw firm conclusions, Escabí says it may be that animals with lesions are slower at catching the “on-off” information in sound signals, making it difficult for them to follow sounds that change quickly.

Also involved in the research are J. Holly Fitch, an assistant research professor in the psychology department, and Joseph J. Lo Turco, an associate professor of physiology and neurobiology. Fitch is looking at how sound perception and behavior are affected in the rats with lesions, and Lo Turco is studying how genetic manipulations affect the development of the brain.

In the new NIH grant project, Escabí and Read are studying the physiology of the inferior colliculus by recording electrical activity, using 16-channel electrodes to record brain activity from 16 sites simultaneously. They play a sound and record how the brain responds.

The response is different, depending on the sound. For one thing, brain activity is selective for frequency: some brain cells respond only to low frequencies, others only to medium frequencies, and still others only to high frequencies.

“This is a really interesting idea – that neurons respond only to certain frequencies,” Escabí says. The different responses can be charted, showing electrical voltage fluctuations. Ultimately, he says, the researchers will relate these fluctuations to perception.
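That charting can be made concrete with a short sketch. The Python fragment below is a hypothetical illustration, not the lab’s actual analysis: given spike counts recorded at 16 sites in response to tones of different frequencies, it reports each site’s “best frequency,” the tone that drives it most strongly.

    import numpy as np

    # Hypothetical illustration (not the lab's actual analysis pipeline):
    # suppose tones at several frequencies were played while spikes were
    # counted at each of the 16 recording sites.
    frequencies_hz = np.array([500, 1000, 2000, 4000, 8000, 16000])

    # spike_counts[site, tone]: rows are the 16 electrode channels,
    # columns are responses to the six tones (made-up numbers).
    rng = np.random.default_rng(0)
    spike_counts = rng.poisson(lam=5.0, size=(16, len(frequencies_hz)))

    # A site's "best frequency" is the tone that evoked the most spikes.
    best_freq = frequencies_hz[np.argmax(spike_counts, axis=1)]
    for site, bf in enumerate(best_freq):
        print(f"site {site:2d}: best frequency ~ {bf} Hz")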

Frequency is only one component of a sound, and Escabí says the research also will look at other properties, such as repetition rate and dynamic – soft to loud – range. His goal is to come up with a mathematical description of how a neuron responds to a sound.
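One common way to write such a description down in auditory neuroscience is the spectro-temporal receptive field, which treats a neuron’s firing rate as a weighted sum of the sound’s recent energy across frequency and time; whether this is the exact model the project adopts is an assumption here. A minimal sketch:

    import numpy as np

    # Hedged sketch of one standard formulation, the spectro-temporal
    # receptive field (STRF); that this is the project's model is an
    # assumption. The firing rate is a weighted sum of recent sound
    # energy across frequency and time.
    n_freq, n_lag, n_time = 32, 20, 500
    rng = np.random.default_rng(1)
    spectrogram = rng.random((n_freq, n_time))    # sound energy: freq x time
    strf = rng.standard_normal((n_freq, n_lag))   # the neuron's weights

    # Predicted rate at time t: sum over frequency f and lag tau of
    # strf[f, tau] * spectrogram[f, t - tau].
    rate = np.zeros(n_time)
    for t in range(n_lag - 1, n_time):
        window = spectrogram[:, t - n_lag + 1 : t + 1]  # last n_lag bins
        rate[t] = np.sum(strf * window[:, ::-1])        # column 0 = lag 0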

While other studies have focused on how the brain processes synthetic, computer-generated sounds – sounds isolated from any background noise – Escabí is interested in how the brain processes natural signals, such as speech or running water.

“The brain has a way, we think, of picking out sounds,” he says, by amplifying selectively so that the more important signal is heard clearly. If researchers can understand how the brain does this, it might be possible someday to design an optimal hearing aid that is as selective as the brain. The problem with hearing aids now, Escabí notes, is that they amplify all sound, including the background noise.
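The contrast can be sketched in a few lines. The toy example below is hypothetical, not a hearing-aid design: a flat gain amplifies signal and noise alike, while a frequency-selective gain boosts only the band carrying the signal of interest.

    import numpy as np

    # Toy contrast between flat and selective amplification; a
    # hypothetical illustration of the idea, not a real device.
    fs = 16000                                    # sample rate, Hz
    t = np.arange(0, 1.0, 1 / fs)
    speech = np.sin(2 * np.pi * 300 * t)          # stand-in "speech" tone
    noise = 0.5 * np.sin(2 * np.pi * 3000 * t)    # stand-in background hiss
    mixture = speech + noise

    # Flat gain: everything, noise included, gets four times louder.
    flat = 4.0 * mixture

    # Selective gain: boost only the band around the 300 Hz "speech".
    spectrum = np.fft.rfft(mixture)
    freqs = np.fft.rfftfreq(mixture.size, 1 / fs)
    gain = np.where(np.abs(freqs - 300) < 100, 4.0, 1.0)
    selective = np.fft.irfft(spectrum * gain, n=mixture.size)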

Escabí’s interest in sound dates back to his days as a drummer in a high school rock band. He first explored acoustics, studying the physics of sound. At Columbia University, where he earned a master’s degree in electrical engineering, he shifted his interest toward hearing.

“Basically, there’s a ton of room for new scientific discovery in this area,” Escabí says.

He earned a Ph.D. in bioengineering in 2000 in a joint program at the University of California at Berkeley and UC San Francisco’s medical school, where he first worked with Read, then a postdoctoral fellow.

He came to UConn four years ago, at the same time the undergraduate biomedical engineering program began. The program is now the third largest of the engineering school’s 12 undergraduate majors. This fall, biomedical engineering had 160 majors.

Escabí, who is in the program’s neural systems engineering specialty, likens his research on the brain to reverse engineering a computer. You look at it, he says, try to figure out what’s happening in it, then describe it.