That's why residual vector quantization is a useful technique: use multiple codebooks to quantize a single timeslice, with each codebook quantizing the residual left over by the previous level. You can also quantize a signal at different frequency resolutions.
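For intuition, here's a minimal numpy sketch of the RVQ idea (toy codebooks, not any particular codec's implementation): each stage snaps to its nearest codeword and passes the leftover residual to the next stage, and the decoder just sums the chosen codewords.

    import numpy as np

    def rvq_encode(frame, codebooks):
        """Quantize one embedding frame with a stack of codebooks.
        Each stage quantizes the residual left over by the previous stage."""
        residual = frame
        codes = []
        for cb in codebooks:                        # cb: (codebook_size, dim)
            idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
            codes.append(idx)                       # nearest codeword
            residual = residual - cb[idx]           # pass the residual down
        return codes

    def rvq_decode(codes, codebooks):
        """Reconstruction is just the sum of the selected codewords."""
        return sum(cb[i] for cb, i in zip(codebooks, codes))

    rng = np.random.default_rng(0)
    dim, n_codebooks, codebook_size = 8, 4, 16
    codebooks = [rng.normal(size=(codebook_size, dim)) * 0.5 ** k
                 for k in range(n_codebooks)]       # later stages cover finer detail
    frame = rng.normal(size=dim)
    codes = rvq_encode(frame, codebooks)
    print(codes, np.linalg.norm(frame - rvq_decode(codes, codebooks)))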
Towards the end of the post there are samples from their LLM trained on their Mimi audio codec.
I read the article and confess some of the modeling parts were above my comprehension. But I would like to add that as an audio engineer, the "key question" you describe is solved, just not applied to transformer models (?).
An experienced engineer can look at a waveform in a DAW and identify specific consonants, vowels, specific words, etc quite fluently. And with tools like Melodyne - which already quantize audio semantically - they can identify (and manipulate) pitch and formants as well, turning an O vowel into an E vowel, or changing the inflection of a phrase (up-speak vs down-speak, for example).
I don't know how to apply this to a neural codec, but it seems like it shouldn't be that hard (that's my naivete coming through)
As an experienced DAW author, I very, very much doubt this.
What can be done relatively easily is to "see", or rather "follow along" in, the waveform while listening to the audio. But I read your claim as saying that someone could look at the waveform (which is already decimated from the original) and identify words or phonemes without hearing the associated audio. I am extremely skeptical that there is anyone anywhere in the world who can do this.
Did Claude Shannon not answer this question in 1948? You need at least 1 bit per 6 dB of dynamic range for each symbol, and 2B symbols per second, where B is the bandwidth of the signal.
Compression techniques are all about getting below that fundamental limit but it's not like this is an unsolved problem. Or is 1kbaud too much for LLMs?
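To put rough numbers on that (the figures here are my own illustrative choices, not from the article): ~6 dB of SNR per bit of quantization, and 2B samples per second from Nyquist.

    def pcm_bitrate(bandwidth_hz, dynamic_range_db):
        bits_per_sample = dynamic_range_db / 6.02   # ~6 dB of SNR per bit
        symbol_rate = 2 * bandwidth_hz              # Nyquist rate
        return bits_per_sample * symbol_rate

    # Telephone-quality speech: ~4 kHz bandwidth, ~48 dB dynamic range
    print(pcm_bitrate(4_000, 48) / 1000, "kbps")    # ~64 kbps, i.e. classic PCM telephony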
DAWs' rendered waveforms have so little information that such identification is likely impossible even in theory. Telling apart plosives and vowels maybe, but not much more than that.
I work with phoneticians and they can (sometimes) read even words from suitably scaled spectrograms, but that's a lot more information than in waveforms.
I feel like there should be a model that can do much of this for me, but I haven't really looked into it, ironically out of laziness, and also because at this stage I'm editing across multiple tracks and I'm wary of feeding the model an already-mixed stereo track. I'm curious why you still do it manually, if you still do, and whether you've looked into alternatives.
Hopefully using Ardour's "Ripple - Interview" mode :))
https://openai.com/index/whisper/
This approach dates back to the 1940s, when people were trained to read speech from spectrograms. There is a 1947 book, "Visible Speech" by Potter, Kopp, and Green, describing these experiments. Here is a slightly more recent review of the subject, from 1988: "Formalizing Knowledge Used in Spectrogram Reading".
The blog post addresses this directly with samples from their own baseline (an autoregressive mu-law vocoder) and from WaveNet (which used a similar architecture). The sound is mostly recognizable as a human voice, but it's unintelligible. The sequence length is too long and the SNR of the encoding scheme is too low for a generative/autoregressive model to learn.
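To put a number on "the sequence length is too long": a sample-level autoregressive model emits one token per audio sample. The sketch below uses WaveNet-style choices (16 kHz, 8-bit mu-law) as assumptions, not numbers from the post.

    import numpy as np

    def mu_law_encode(x, mu=255):
        """Compand x in [-1, 1] and quantize to 256 discrete tokens."""
        companded = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
        return ((companded + 1) / 2 * mu + 0.5).astype(np.int32)   # 0..255

    sample_rate = 16_000
    audio = np.random.uniform(-1, 1, size=sample_rate)   # one second of "audio"
    tokens = mu_law_encode(audio)
    print(tokens.shape[0], "tokens per second,", 60 * tokens.shape[0], "per minute")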
This is what the neural codec is intended to address. Decoupling semantic from acoustic modelling is an important step ("how our ears interpret a sound" vs. "what we need to reconstruct the exact acoustic signal"). Mimi works at 1.1 kbps, and others also work at low bitrates (Descript, SemantiCodec, etc.). Encodec runs at a higher bitrate, so it generally delivers better audio quality.
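For comparison, the bitrate of an RVQ-based neural codec falls straight out of its frame rate and codebook setup. The numbers below are illustrative assumptions that happen to land at ~1.1 kbps, not a statement of Mimi's exact configuration.

    import math

    def rvq_bitrate(frame_rate_hz, n_codebooks, codebook_size):
        # bits/s = frames/s * codebooks per frame * bits per codebook index
        return frame_rate_hz * n_codebooks * math.log2(codebook_size)

    print(rvq_bitrate(12.5, 8, 2048))   # 1100.0 bits/s, i.e. ~1.1 kbps
    # ...and only 12.5 * 8 = 100 tokens/s for the model, vs. 16,000/s at sample level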
Now - why are neural codecs easier to model than conventional parametric codecs? I don't know. Maybe they're not, maybe it's just an artifact of the transformer architecture (since semantic tokens are generally extracted from self-supervised models like WavLM). It's definitely an interesting question.