
How the cochlea computes (2024)

(www.dissonances.blog)
475 points izhak | 2 comments
p0w3n3d No.45762510
Tbh I used to think that it does. For example, when playing higher notes, it's harder to hear the out-of-tune frequencies than on the lower notes.
replies(2): >>45762672 >>45765250
fallingfrog No.45762672
I haven't noticed that effect, to be honest. Actually I think it's the really low bass frequencies that are harder to tune, especially if you remove the harmonics and just leave the fundamental.

Are you perhaps experiencing some high frequency hearing loss?

replies(1): >>45762737
jacquesm No.45762737
It's even more complex than that. The low notes are hard to tune because the fundamentals are very close to each other and you need to have super good hearing to match the beats; fortunately they sustain for a long time, which helps. Missing fundamentals are a funny thing too: you might not be 'hearing' what you think you hear at all! The high notes are hard to tune because they sound very briefly (definitely on a piano) and even the slightest movement of the pin will change the pitch considerably.
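To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python (the 1-cent error and the A1/A4/A6 frequencies are just illustrative values, not anything specific to a real tuning session):

    # Beat rate between two strings that differ by a fixed number of cents.
    # The beat frequency is simply the difference between the two frequencies.
    def beat_rate_hz(fundamental_hz, cents_error):
        detuned_hz = fundamental_hz * 2 ** (cents_error / 1200)
        return abs(detuned_hz - fundamental_hz)

    for name, f in [("A1", 55.0), ("A4", 440.0), ("A6", 1760.0)]:
        print(name, round(beat_rate_hz(f, 1.0), 3), "beats/s for a 1-cent error")
    # A1: ~0.03 beats/s (one beat every ~30 seconds), A6: ~1 beat/s. The same
    # error is much easier to hear as beating high up, but those notes die away fast.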

In the middle range (say, A2 through A6) neither of these issues applies, so it is - by far - the easiest to tune.

replies(3): >>45763087 >>45763456 >>45765123
1. TheOtherHobbes No.45763087
See also: psychoacoustics. The ear doesn't just do frequency decomposition; it's not clear that it even does frequency decomposition. What actually happens is a lot of perceptual modelling and relative amplitude masking, which makes it possible to do real-time source separation.

Which is why we can hear individual instruments in a mix.

And this ability to separate sources can be trained, just as pitch perception can be trained, with results ranging from increased acuity up to full perfect pitch.

A component near the bottom of all that is range-based perception of consonance and dissonance, based on the relationships between beat frequencies and fundamentals.
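One way to make that beat/roughness relationship concrete is a Plomp-Levelt style dissonance curve. A minimal sketch in Python, with constants that are only approximately Sethares' published fit (quoted from memory, so treat the exact numbers as illustrative):

    import math

    # Rough Plomp-Levelt style roughness between two pure tones. Roughness is
    # near zero at unison, peaks when the spacing is a small fraction of the
    # critical band, and falls off again for wider intervals.
    def roughness(f1_hz, f2_hz):
        lo, hi = min(f1_hz, f2_hz), max(f1_hz, f2_hz)
        s = 0.24 / (0.021 * lo + 19.0)  # scales the curve with the critical band
        x = s * (hi - lo)
        return math.exp(-3.5 * x) - math.exp(-5.75 * x)

    print(roughness(261.6, 277.2))  # C4 against C#4: a rough semitone clash
    print(roughness(261.6, 329.6))  # C4 against E4: a much smoother major third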

Instead of a vanilla Fourier transform, frequencies are divided into multiple critical bands (q.v.) with different properties and effects.
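And for a feel of how wide those bands actually are, the standard Glasberg & Moore ERB estimate is a one-liner (Python; the sample frequencies are picked arbitrarily):

    # Equivalent rectangular bandwidth of the auditory filter at a given centre
    # frequency, after Glasberg & Moore (1990): ERB = 24.7 * (4.37 * f_kHz + 1).
    def erb_hz(f_hz):
        return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

    for f in (100, 500, 1000, 4000, 10000):
        print(f, "Hz ->", round(erb_hz(f), 1), "Hz wide")
    # ~35 Hz wide at 100 Hz but ~1100 Hz wide at 10 kHz: nothing like the
    # uniform bins of a plain Fourier transform.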

What's interesting is that the critical bands seem to be dynamic, so they can be tuned to some extent depending on what's being heard.

Most audio theory has a vanilla EE take on all of this, with concepts like SNR, dynamic range, and frequency resolution.

But the experience of audio is hugely more complex. The brain-ear system is an intelligent system which actively classifies, models, and predicts sounds, speech, and music as they're being heard, at various perceptual levels, all in real time.

replies(1): >>45763131
2. jacquesm No.45763131
Yes, indeed: to think of the ear as the thing that hears is already a huge error. The ear is - at best - a faulty transducer with its own unique way of turning air pressure variations into nerve impulses, and what the brain does with those impulses is as much a part of hearing as the mechanics of the ear. In the same way, a computer keyboard does not interpret your keystrokes; it just turns them into electrical signals.