257 points by amrrs | 4 comments

Mizza No.41842147
What's SOTA for open source or on-device right now?

I tried building a babelfish with o1, but the transcription in languages other than English is useless. When the transcription is correct, the translations are pretty much perfect and the voice responses are super fast, but without good transcription it's kind of useless. So close!
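
For reference, the rough shape of what I was building; a sketch, not my actual code. The whisper calls assume the open source openai-whisper package, and translate_text/speak are placeholders for whatever translation and TTS backends you wire in:

    import whisper  # open source openai-whisper package

    # Bigger checkpoints transcribe non-English audio noticeably
    # better, though for me even these struggled.
    model = whisper.load_model("medium")

    def babelfish(audio_path, target_lang):
        # 1. Transcribe in the source language -- the weak link for me.
        text = model.transcribe(audio_path)["text"]
        # 2. Translate -- this part was pretty much perfect.
        translated = translate_text(text, target_lang)  # placeholder
        # 3. Speak the translation -- this part was super fast.
        speak(translated)  # placeholder for the TTS backend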

replies(5): >>41842153 >>41842200 >>41842281 >>41843179 >>41846783
diggan No.41842281
I was literally just looking at that today, and the best one I came across was F5-TTS: https://swivid.github.io/F5-TTS/

Only thing missing (for me) is "emotion tokens", instead of forcing the entire generation to use one specific emotion; the generated voice is a bit too robotic otherwise.

replies(1): >>41842581
1. moffkalast No.41842581
> based on flow matching with Diffusion Transformer

Yeah, that's not gonna be realtime. It's really odd that we currently have two options: VITS/Piper, which runs at ludicrous speed on a CPU and is kinda OK, and the slightly more natural models a la StyleTTS2 that take two minutes to generate a sentence even with CUDA acceleration.

Like, is there a middle ground? Maybe inverting one of the smaller Whispers or something.
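
For anyone curious about the gap, here's a quick way to measure a model's real-time factor, using Piper as the example. This assumes the piper CLI is installed and a voice model is downloaded; flag names may vary by version:

    import subprocess, time, wave

    text = "The quick brown fox jumps over the lazy dog."

    start = time.time()
    # Pipe text into the piper CLI.
    subprocess.run(
        ["piper", "--model", "en_US-lessac-medium.onnx",
         "--output_file", "out.wav"],
        input=text.encode(), check=True,
    )
    gen_secs = time.time() - start

    # Real-time factor: generation time / audio duration.
    # Below 1.0 means faster than realtime playback.
    with wave.open("out.wav") as w:
        audio_secs = w.getnframes() / w.getframerate()
    print(f"RTF: {gen_secs / audio_secs:.2f}")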

replies(2): >>41842748 >>41842974
2. modeless No.41842748
StyleTTS2 is faster than realtime
replies(1): >>41847018
3. gunalx No.41842974
Bark?
4. moffkalast No.41847018
To be clear, what I mean by realtime is the full generation finishing in at most ~200 ms, so the audio can be sent to the sound card and start playing immediately. Merely generating faster than the audio's playback duration still adds the entire generation time as delay, which is unusably long in practice.

I suppose it might be possible with streaming of very short segments, but I haven't seen any implementation that allows for that, and with diffusion-based models it doesn't even work conceptually, since they denoise the whole utterance at once rather than emitting audio incrementally.
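
To make the distinction concrete, a toy sketch of what segment streaming would buy you: perceived latency becomes time-to-first-chunk rather than time-to-full-generation. Everything here is hypothetical; synthesize_chunk stands in for a model that can emit audio per segment, and play is assumed to be non-blocking:

    import time

    def stream_tts(sentences, synthesize_chunk, play):
        # synthesize_chunk: hypothetical per-segment synthesis call.
        # play: hypothetical non-blocking playback, so later segments
        # generate while earlier ones are still playing.
        start = time.time()
        first_audio_ms = None
        for s in sentences:
            audio = synthesize_chunk(s)
            if first_audio_ms is None:
                # This, not total generation time, is what the user waits for.
                first_audio_ms = (time.time() - start) * 1000
            play(audio)
        print(f"perceived latency: {first_audio_ms:.0f} ms")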