425 points karimf | 3 comments
trollbridge ◴[] No.45655616[source]
An ongoing question I have is why effort wasn't put into tokenising speech (instead of transcribed words) and then making an LLM out of that. There are huge amounts of speech available to train on.
replies(5): >>45655692 #>>45655754 #>>45655792 #>>45655815 #>>45656008 #
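As background on what "tokenising speech" usually means here: neural audio codecs (e.g. EnCodec, or the Mimi codec used by Moshi) encode the waveform into short frames and snap each frame to entries of a learned codebook, so the audio becomes a sequence of integer tokens. A toy sketch of that quantization step, using a random stand-in codebook and made-up frame rate rather than anything learned:

    import numpy as np

    rng = np.random.default_rng(0)

    frame_rate_hz = 12.5                     # assumed codec frame rate (illustrative)
    codebook = rng.normal(size=(1024, 64))   # 1024 entries, 64-dim; toy values, not learned

    def tokenize(latents):
        # latents: (num_frames, 64) encoder outputs; return nearest-codebook indices
        dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        return dists.argmin(axis=1)

    # 10 seconds of audio -> 125 frames -> 125 integer "audio tokens"
    latents = rng.normal(size=(int(10 * frame_rate_hz), 64))
    tokens = tokenize(latents)
    print(tokens.shape, tokens[:8])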
benob ◴[] No.45655754[source]
Audio tokenization consumes at least 4x as many tokens as text, so there is an efficiency problem to start with. Then, is there enough audio data to train an LLM from scratch?
replies(3): >>45655785 #>>45656849 #>>45663245 #
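To make the token-rate comparison above concrete, a quick back-of-envelope calculation; every number in it is an assumed ballpark figure for illustration, not a value from the thread or from any paper:

    # Rough token-rate gap between audio tokens and text tokens (all numbers assumed)
    audio_tokens_per_sec = 25.0                 # assumed coarse, single-codebook tokenizer
    words_per_sec = 150 / 60                    # ~150 words per minute of speech
    text_tokens_per_sec = words_per_sec * 1.3   # assume ~1.3 BPE tokens per word

    ratio = audio_tokens_per_sec / text_tokens_per_sec
    print(f"audio ~{audio_tokens_per_sec:.0f} tok/s, "
          f"text ~{text_tokens_per_sec:.1f} tok/s, ratio ~{ratio:.1f}x")

With these assumptions the ratio comes out around 8x; richer tokenizers with multiple codebooks per frame are considerably worse, which is where the "at least 4x" floor comes from.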
542354234235 ◴[] No.45656849[source]
Don't we have tens of thousands of hours (hundreds of thousands?) of closed-captioned TV shows and movies? How many hours of news broadcasts with transcripts do we have? Maybe I just don't understand what is needed, but it seems like we have a lot of data to work with.
replies(2): >>45656942 #>>45656992 #
1. cruffle_duffle ◴[] No.45656992[source]
Correct me if I’m wrong, but you need more than just closed captions; you need precise timing too. I’d think you’d need the text to line up exactly with the audio, so that when the voice makes an “A” sound, the text it aligns with is “A” as well.

So while having the closed captions saves some of the work, there is probably much more work needed to get everything lined up.

But I’m absolutely not an expert at all. In fact, this is the first time I’ve ever even thought about it!

replies(1): >>45657447 #
2. vvolhejn ◴[] No.45657447[source]
Author here. Speech-to-text is more or less solved; it's easy to automatically get captions including precise timestamps. For training Moshi, Kyutai's audio LLM, my colleagues used whisper-timestamped to transcribe 7 million hours of audio.

See Section 4.2 in the Moshi paper: https://arxiv.org/pdf/2410.00037

replies(1): >>45658167 #
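For readers curious what that looks like in practice, here is a minimal sketch of pulling word-level timestamps with whisper-timestamped, following the library's documented usage; the audio path and model size are placeholders, and the exact output fields can vary by version:

    import whisper_timestamped as whisper

    audio = whisper.load_audio("clip.wav")            # placeholder path
    model = whisper.load_model("small", device="cpu") # placeholder model size
    result = whisper.transcribe(model, audio)

    # Each segment carries a list of words with start/end times in seconds
    for segment in result["segments"]:
        for word in segment.get("words", []):
            print(f'{word["start"]:7.2f} {word["end"]:7.2f}  {word["text"]}')

Output like this is what gives the precise audio-to-text alignment asked about upthread, without any manual timing work.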
3. cruffle_duffle ◴[] No.45658167[source]
Sweet!