
448 points by lastdong | 1 comment
TheAceOfHearts No.45115690
Unfortunately it's not usable if you're GPU-poor. I couldn't figure out how to run this on an old 1080. I tried VibeVoice-1.5B on my old CPU with torch.float32 and it took 832 seconds to generate a 66-second audio clip. Switching from torch.bfloat16 to torch.float32 also introduced some weird sound artifacts in the audio output. If you're GPU-poor, the best TTS model I've tried so far is Kokoro.
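
For anyone curious, Kokoro is also easy to try on CPU. A minimal sketch, assuming the hexgrad "kokoro" pip package plus soundfile; the API here is from memory of the project README, so double-check against the current version:

    from kokoro import KPipeline
    import soundfile as sf

    # 'a' = American English; model weights download on first use
    pipeline = KPipeline(lang_code='a')

    text = "The quick brown fox jumps over the lazy dog."
    # The pipeline yields (graphemes, phonemes, audio) chunks
    for i, (gs, ps, audio) in enumerate(pipeline(text, voice='af_heart')):
        sf.write(f'chunk_{i}.wav', audio, 24000)  # Kokoro outputs 24 kHz audio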

Someone else mentioned in this thread that you cannot add annotations to the text to control the output. I think for these models to really level up there will have to be an intermediate step that takes your plain text as input and generates an annotated output, which can then be passed to the TTS model. That would give users far more control over the final output, since they could inspect and tweak the details instead of expecting the model to get everything right in a single pass.
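
To make that concrete: SSML already exists as a standard for this kind of annotation. A toy sketch of the two-stage flow, where both functions are hypothetical stand-ins rather than any real model's API:

    # Hypothetical two-stage pipeline; neither function is a real API.
    def annotate(text: str) -> str:
        # Stage 1: in practice an LLM would choose the tags. Hard-coded
        # here just to show the kind of inspectable intermediate output.
        return ('<speak><emphasis level="strong">' + text +
                '</emphasis><break time="300ms"/></speak>')

    def synthesize(annotated_ssml: str) -> bytes:
        # Stage 2: a TTS model that honours the annotations.
        raise NotImplementedError("stand-in for the actual TTS call")

    draft = annotate("You actually did it!")
    print(draft)  # inspect and tweak this draft before committing to audio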

replies(1): >>45115995 #
tempodox No.45115995
This is ludicrous. macOS has had text-to-speech with acceptable quality for ages, and it never needed energy- and compute-expensive models for it. And it reacts instantly, not after ridiculous delays. I can't believe this hype about “AI”; it's just too absurd.
replies(1): >>45116204 #
NitpickLawyer No.45116204
> with acceptable quality

Compared to the speech synthesizer in Stephen Hawking's chair, maybe. But Apple's TTS is not acceptable quality by any modern understanding of SotA, IMO.

replies(1): >>45116623 #
selkin No.45116623
Different use cases:

If you just need non-visual output of text, SotA is a waste of electrons.

If you want to try to mimic a human speaker, then it ain't.

The question is why you would need the computer to sound more human, except for “because I can”.

replies(3): >>45116733 #>>45117806 #>>45119308 #
Ukv No.45117806
> The question is why you would need the computer to sound more human

I think translation would be a big use - maybe translating your voice to another language while maintaining emotion and intonation, or dubbing content (videos, movies, podcasts, ...) that isn't otherwise available in your native language.

Traditional non-ML TTS for longer content like podcasts or audiobooks seems like it'd become grating to the point of being unlistenable, or at least a significantly worse experience. That kind of content stands to benefit from more natural-sounding voices that can place emphasis in the right places.

Since Stephen Hawking was brought up, there are likely also people with voice-impairing illnesses who would like to speak in their own voice again (in addition to those who are fine with a robotic voice). Or alternatively, people who are uncomfortable with their natural voice and want to communicate closer to how they wish to be perceived.

It could also potentially enable new forms of interactive media that aren't currently feasible: customised movies, audio dramas where the listener plays a role, videogame NPCs that react with more than just prerecorded lines, etc.