Scientists at Carnegie Mellon University have discovered that our ears encode the sounds we hear in the most efficient way possible. These results represent a significant advance in understanding how sound is encoded for transmission to the brain.
The research provides a new mathematical framework for understanding sound processing and suggests that our hearing is highly optimized in terms of signal coding (the process by which sounds are translated into information by our brains) for the range of sounds we experience. The same work also has far-reaching, long-term technological implications, such as providing a predictive model to vastly improve signal processing for better-quality compressed digital audio files and designing brain-like codes for cochlear implants, which restore hearing to the deaf.
To achieve their results, the researchers took a radically different approach to analyzing how the brain processes sound signals. Abstracting from the neural code at the auditory nerve, they represented sound as a discrete set of time points, or a "spike code," in which acoustic components are represented only in terms of their temporal relationship to one another. The intensity and basic frequency of a given feature are "kernelized," or compressed mathematically, into a single spike. Much as a player piano roll can reproduce any song by recording which note to press and when, the spike code encodes any natural sound in terms of the precise timing of its elemental acoustic features. Remarkably, when the researchers derived the optimal set of features for natural sounds, they corresponded exactly to the response patterns neurophysiologists have observed in the auditory nerve.
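The decoding side of such a spike code is easy to sketch: a signal is rebuilt as a sum of brief, frequency-tuned kernels, each placed at a spike's time and scaled by its amplitude. The encoding algorithm itself is not described in the article, and the kernel shape below (a simple gammatone-like waveform) and all parameter values are illustrative assumptions, not the researchers' actual model.

```python
import numpy as np

def gammatone_kernel(length, freq, sr):
    """An illustrative gammatone-like kernel: the kind of brief,
    frequency-tuned acoustic feature a single spike stands for."""
    t = np.arange(length) / sr
    env = t**3 * np.exp(-2 * np.pi * 50 * t)      # rising-then-decaying envelope
    k = env * np.cos(2 * np.pi * freq * t)        # carrier at the kernel's frequency
    return k / np.linalg.norm(k)                  # unit norm, so amplitude lives in the spike

def synthesize(spikes, kernels, n_samples):
    """Rebuild a waveform from a spike code.
    Each spike is a tuple (sample_time, kernel_index, amplitude)."""
    x = np.zeros(n_samples)
    for t0, m, a in spikes:
        k = kernels[m]
        end = min(t0 + len(k), n_samples)
        x[t0:end] += a * k[: end - t0]            # drop the kernel in at the spike time
    return x

sr = 16000
kernels = [gammatone_kernel(512, f, sr) for f in (400.0, 1200.0)]
# three spikes suffice to describe a full second of this toy signal
spikes = [(1000, 0, 0.9), (4000, 1, 0.5), (7500, 0, 0.3)]
signal = synthesize(spikes, kernels, sr)
```

The sparseness is the point: one second of audio at 16 kHz is 16,000 samples, but here it is summarized by three (time, feature, amplitude) triples.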
"We've found that timing of just a sparse number of spikes actually encodes the whole range of natural sounds, including components of speech such as vowels and consonants, and natural environment sounds like footsteps in a forest or a flowing stream," said Michael Lewicki, associate professor of computer science at Carnegie Mellon and a member of the Center for the Neural Basis of Cognition (CNBC). "We found that the optimal code for natural sounds is the same as that for speech. Oddly enough, cats share our optimal auditory code for the English language."
"Our work is the only research to date that efficiently encodes auditory signals as kernelized spikes," said Evan Smith, a graduate student in psychology at the CNBC.
Until now, scientists and engineers have relied on Fourier transforms, first described some 200 years ago, to separate and reconstitute parameters like frequency and intensity in traditional sound signal processing.
Smith and Lewicki's approach dissects sound based only on the timing of compressed "spikes" associated with vowels (and similar sounds, such as cat vocalizations), consonants (and similar sounds, such as rocks striking one another) and sibilants (and similar ambient noise).
The authors' research combines computer science, psychology, neuroscience and mathematics.

> from *Carnegie Mellon Scientists Show How Brain Processes Sound: Landmark Results Could Improve Devices from iPods to Cochlear Implants*. February 23, 2006
> sound-analysis breakthrough. extremely high-resolution time-frequency analysis. july 26, 2006
> brain frequency map. researchers map out numerous areas in the brain where sound frequencies are processed. june 22, 2006
> sound of silence activates auditory cortex. auditory imagery is the subjective experience of hearing in the absence of auditory stimulation. 2005
> sonification. data sonification is becoming one of the most promising analysis tools, since sounds can summarize significant amounts of information and can be characterized, stored and studied more simply and easily than other data representations. november 25, 2005
> how we hear. discovered how tiny cells in the inner ear change sound into an electrical signal the brain can understand. may 7, 2002
> auditory-protective follie