rob:
For anybody trying to do this in bulk: instead of using OpenAI's Whisper via their API, you can also use Groq [0], which is much cheaper:

[0] https://groq.com/pricing/

Groq is ~$0.02 per hour of audio with distil-large-v3, or ~$0.04 per hour with whisper-large-v3-turbo. I believe OpenAI comes out to something like ~$0.36 per hour.
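Groq's endpoint is OpenAI-compatible, so the usual openai client works if you just point it at a different base URL. A minimal sketch (the API key placeholder, file name, and exact model IDs are illustrative; check Groq's docs for the current ones):

    from openai import OpenAI  # pip install openai

    client = OpenAI(
        api_key="YOUR_GROQ_API_KEY",  # placeholder; use a key from the Groq console
        base_url="https://api.groq.com/openai/v1",
    )

    # Transcribe one file; "meeting.mp3" is a hypothetical input.
    with open("meeting.mp3", "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-large-v3-turbo",  # or the cheaper distil model
            file=f,
        )

    print(transcript.text)

At those rates, 1,000 hours of audio comes out to roughly $20-$40 on Groq versus ~$360 on OpenAI.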

We do this internally with our tool that automatically transcribes local government council meetings right when they get uploaded to YouTube. It uses Groq by default, but I also added support for Replicate and Deepgram as backups because sometimes Groq errors out.
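The backup logic is nothing fancy, roughly this shape (the per-provider wrappers named in the comments are hypothetical stand-ins for each SDK's real call):

    from typing import Callable

    def transcribe_with_fallback(audio_path: str,
                                 providers: list[Callable[[str], str]]) -> str:
        # Try each provider in order; return the first transcript that succeeds.
        last_error: Exception | None = None
        for provider in providers:
            try:
                return provider(audio_path)
            except Exception as err:  # e.g. Groq erroring out under load
                last_error = err
        raise RuntimeError("all transcription providers failed") from last_error

    # Usage sketch: transcribe_groq, transcribe_replicate, and transcribe_deepgram
    # would be thin (hypothetical) wrappers around each provider's SDK:
    # text = transcribe_with_fallback("meeting.mp3",
    #     [transcribe_groq, transcribe_replicate, transcribe_deepgram])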

colechristensen:
If you have a recent MacBook, you can run the same Whisper model locally for free. People are really sleeping on how cheap the compute they already own is.
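E.g. with the open-source openai-whisper package it's a few lines (needs ffmpeg installed; the file name here is just an example):

    import whisper  # pip install openai-whisper

    # First call downloads the model weights (large-v3 is ~3 GB).
    model = whisper.load_model("large-v3")

    # Any ffmpeg-readable audio file works here.
    result = model.transcribe("meeting.mp3")
    print(result["text"])

On Apple Silicon, whisper.cpp (with Metal) or mlx-whisper will typically run much faster than the reference PyTorch package.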
rob:
I don't. I have a MacBook Pro from 2019 with an Intel chip and 16 GB of memory. Pretty sure when I tried the large Whisper model, it took 30 minutes to an hour to do something that took hardly any time via Groq. It's been a while, though, so maybe my times are off.
colechristensen:
Ah, no, an Apple Silicon Mac with a decent amount of memory is required. But that kind of machine (a mid-to-high-range recent MacBook) has been very common at all of my employers for a long time.