425 points karimf | 1 comment
trollbridge No.45655616
An ongoing question I have is why effort wasn't put into tokenising speech (instead of transcribed words) and then making an LLM out of that. There are huge amounts of speech available to train on.
replies(5): >>45655692 #>>45655754 #>>45655792 #>>45655815 #>>45656008 #
benob No.45655754
Audio tokenization consumes at least 4x as many tokens as text, so there is an efficiency problem to start with. And then: is there enough audio data to train an LLM from scratch?
replies(3): >>45655785 #>>45656849 #>>45663245 #
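The "at least 4x" figure can be sanity-checked with a back-of-envelope calculation. All rates below are illustrative assumptions, not measurements: speech averages roughly 150 words per minute, a typical text tokenizer yields about 1.3 tokens per word, and a low-bitrate neural audio codec emits on the order of 12.5 discrete tokens per second:

```python
# Back-of-envelope: tokens needed to represent one minute of speech
# as text versus as discrete audio codec tokens.
# All rates are illustrative assumptions, not measured values.

SECONDS = 60
WORDS_PER_MIN = 150           # assumed average speaking rate
TEXT_TOKENS_PER_WORD = 1.3    # assumed text tokenizer yield
AUDIO_TOKENS_PER_SEC = 12.5   # assumed low-bitrate codec frame rate

text_tokens = WORDS_PER_MIN * TEXT_TOKENS_PER_WORD    # 195 tokens/min
audio_tokens = SECONDS * AUDIO_TOKENS_PER_SEC         # 750 tokens/min

print(f"text tokens/min:  {text_tokens:.0f}")
print(f"audio tokens/min: {audio_tokens:.0f}")
print(f"ratio: {audio_tokens / text_tokens:.1f}x")
```

Under these assumptions the ratio comes out near 4x, and higher-fidelity codecs (more frames per second, multiple codebooks per frame) push it well beyond that.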
trollbridge No.45655785
Start an MVNO that offers cheaper phone plans and train on all those phone calls.

There are big libraries of old speeches.

Simply capture all current radio/TV transmissions and train on those (we've already established copyright doesn't apply to LLM training, right?)

replies(1): >>45656245 #
miki123211 No.45656245
> Start an MVNO that offers cheaper phone plans and and train on all those phone calls.

Q: What is 2+2?

A: The warranty for your car has expired...