
695 points georgemandis | 1 comment
PeterStuer ◴[] No.44385188[source]
I wonder how much time and battery transcoding/uploading/downloading over coffee-shop wifi would really save vs just running it locally through an optimized Whisper.
replies(1): >>44388301 #
1. georgemandis ◴[] No.44388301[source]
I had this same thought and won't pretend my fear was rational, haha.

One thing I thought was fairly clear in my write-up, but that feels a little lost in the comments: I didn't just try this with Whisper. I also tried it with their newer gpt-4o-transcription model, which seems considerably faster. There's no way to run that one locally.
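The remote-vs-local trade-off raised above can be sanity-checked with back-of-envelope arithmetic: upload time plus hosted processing time versus local processing time. This is only a sketch; the bandwidth, file size, and real-time factors below are illustrative assumptions, not measurements from the article.

```python
# Rough comparison of remote (upload + hosted API) vs local transcription time.
# All concrete numbers here are assumptions for illustration.

def remote_seconds(file_mb: float, upload_mbps: float,
                   api_rtf: float, audio_s: float) -> float:
    """Upload time (MB over a given Mbit/s link) plus API processing time,
    where api_rtf is the model's processing time as a fraction of real time."""
    return (file_mb * 8) / upload_mbps + audio_s * api_rtf

def local_seconds(audio_s: float, local_rtf: float) -> float:
    """Local Whisper processing time as a fraction of real time."""
    return audio_s * local_rtf

# Assumed scenario: a 40-minute talk as a ~25 MB mp3, a flaky 5 Mbit/s
# coffee-shop uplink, a hosted model at ~0.05x real time, and a local
# Whisper build at ~0.3x real time.
audio = 40 * 60  # 2400 seconds
print(remote_seconds(25, 5, 0.05, audio))  # 40s upload + 120s API = 160.0
print(local_seconds(audio, 0.3))           # 720.0
```

Under these assumed numbers the upload cost is small relative to the processing-speed gap, which matches the thread's intuition that a faster hosted model can win even over weak wifi; the conclusion flips if the uplink is slow enough or the local real-time factor is low enough.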