
How the cochlea computes (2024)

(www.dissonances.blog)
475 points by izhak | 1 comment
edbaskerville No.45762928
To summarize: the ear does not do a Fourier transform, but it does perform a time-localized frequency-domain transform akin to wavelets (specifically, one intermediate between the wavelet and Gabor transforms). It does this because the sounds the ear processes are often localized in time.
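A minimal numerical sketch of that distinction (my assumptions, not the article's exact model: Gaussian analysis atoms, a fixed 10 ms Gabor window, 6-cycle wavelets, and Glasberg & Moore's standard ERB formula as the "cochlea-like" bandwidth). The only thing that differs between the three analyses is how the window width varies with frequency:

    import numpy as np

    def atom(t, f, sigma):
        """Gaussian-windowed complex sinusoid: the analysis atom at frequency f.

        Each transform is just inner products of the signal with such atoms;
        only the rule for choosing sigma(f) differs below.
        """
        return np.exp(-0.5 * (t / sigma) ** 2) * np.exp(2j * np.pi * f * t)

    def gabor_sigma(f, sigma0=0.010):
        # Gabor/STFT: one fixed 10 ms window for every frequency
        return sigma0

    def wavelet_sigma(f, cycles=6.0):
        # Wavelet/constant-Q: window shrinks as 1/f (fixed number of cycles)
        return cycles / (2 * np.pi * f)

    def cochlear_sigma(f):
        # Cochlea-like: Gaussian time width matched to the ERB bandwidth
        # (Glasberg & Moore approximation). ERB is roughly constant at low f
        # (Gabor-like) and roughly proportional to f at high f (wavelet-like).
        erb_hz = 24.7 * (4.37 * f / 1000.0 + 1.0)
        return 1.0 / (2 * np.pi * erb_hz)

    for f in (250, 1000, 4000):
        print(f"{f:5d} Hz | gabor {gabor_sigma(f)*1e3:6.2f} ms"
              f" | wavelet {wavelet_sigma(f)*1e3:6.2f} ms"
              f" | cochlea ~{cochlear_sigma(f)*1e3:6.2f} ms")

Running this shows the cochlea-like window shrinking with frequency, but more slowly than a wavelet's strict 1/f (roughly 9x from 250 Hz to 4 kHz instead of 16x), i.e. genuinely intermediate between the two.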

The article also describes a theory that human speech evolved to occupy an unoccupied region of frequency vs. envelope-duration space. It makes no explicit connection between that fact and the type of transform the ear performs, but one would suspect that the specific characteristics of the human cochlea are tuned to human speech while still processing environmental and animal sounds sufficiently well.

A more complicated hypothesis off the top of my head: the location of human speech in frequency/envelope space reflects a tradeoff among (1) occupying an unfilled niche in sound space; (2) maximizing information density given the brain's processing speed; and (3) evolutionary constraints on the physiology of sound production and hearing.
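For concreteness, here is one simple way to place a sound in that frequency/envelope plane. The definitions are my own assumptions, not the article's: spectral centroid for center frequency, and the RMS time-width of the Hilbert amplitude envelope for envelope duration.

    import numpy as np
    from scipy.signal import hilbert

    def freq_envelope_coords(x, fs):
        """Locate a sound in the (center frequency, envelope duration) plane."""
        # Center frequency: centroid of the magnitude spectrum
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        f_c = np.sum(freqs * spec) / np.sum(spec)
        # Envelope duration: RMS time-width of the Hilbert amplitude envelope
        env = np.abs(hilbert(x))
        t = np.arange(len(x)) / fs
        t_c = np.sum(t * env) / np.sum(env)
        dur = np.sqrt(np.sum((t - t_c) ** 2 * env) / np.sum(env))
        return f_c, dur

    # Example: a 1 kHz tone burst with a 50 ms Gaussian envelope
    fs = 16_000
    t = np.arange(0, 0.5, 1.0 / fs)
    x = np.sin(2 * np.pi * 1000 * t) * np.exp(-0.5 * ((t - 0.25) / 0.05) ** 2)
    f_c, dur = freq_envelope_coords(x, fs)
    print(f"center frequency ~{f_c:.0f} Hz, envelope duration ~{dur*1e3:.0f} ms")

The example burst lands near (1000 Hz, 50 ms), as expected; running a corpus of speech, environmental, and animal sounds through this would map out which regions of the plane are occupied.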

matthewdgreen No.45763057
If you take this thought process even further, specific words and phonemes should occupy specific slices of the tradeoff space. Across all languages and cultures, an immediate warning that a tiger is about to jump on you should sit in a different place than a mother comforting a baby (which, of course, it does). Maybe that even filters down to ordinary conversational speech.