
703 points georgemandis | 1 comment
w-m No.44378345
By transcribing a talk by Andrej, you picked the most challenging case possible, speed-wise. His natural talking speed is already >=1.5x that of a normal human. He's one of those people whose videos you absolutely have to set back down to 1x speed on YouTube to follow what's going on.

In the spirit of making the most of an OpenAI minute, don't send it any silence.

E.g.

    ffmpeg -y -i video-audio.m4a \
      -af "silenceremove=start_periods=1:start_duration=0:start_threshold=-50dB:stop_periods=-1:stop_duration=0.02:stop_threshold=-50dB,apad=pad_dur=0.02" \
      -c:a aac -b:a 128k output_minpause.m4a
This cuts the talk down from 39m31s to 31m34s by replacing every stretch of silence (below a -50dB threshold) longer than 20ms with a 20ms pause. And in keeping with the spirit of your post, I only measured that the input file got shorter; I didn't look at all at the quality of the transcription produced from the shorter version.
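
Measuring the shortening is easy to script. A minimal sketch (not part of the original comment) that compares the container durations reported by ffprobe before and after; it assumes ffprobe is on PATH and reuses the file names from the command above:

    # Compare audio durations before and after silence removal, via ffprobe.
    import subprocess

    def duration_seconds(path: str) -> float:
        """Return the container duration (in seconds) reported by ffprobe."""
        out = subprocess.run(
            ["ffprobe", "-v", "error",
             "-show_entries", "format=duration",
             "-of", "default=noprint_wrappers=1:nokey=1",
             path],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return float(out)

    before = duration_seconds("video-audio.m4a")
    after = duration_seconds("output_minpause.m4a")
    print(f"{before:.0f}s -> {after:.0f}s ({100 * (1 - after / before):.1f}% shorter)")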
1. vayup No.44388923
Gemini charges by tokens rather than minutes. I used VAD to trim silence, hoping the token count would go down. It wasn't much different (e.g., 30 seconds of background noise produced about the same token count as 2 seconds). Either the Gemini API trims silence under the hood, or tokenization depends on the speech content rather than the audio length. Not sure which.

In either case, I bet OpenAI is doing the same optimization under the hood and keeping the savings for themselves.
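
For reference, a rough sketch of that kind of VAD trimming, assuming a 16 kHz mono 16-bit PCM WAV input and the webrtcvad package (pip install webrtcvad); the file names and the aggressiveness setting are illustrative, not from the comment above:

    # Keep only the frames webrtcvad flags as speech, drop everything else.
    import wave
    import webrtcvad

    FRAME_MS = 30  # webrtcvad accepts 10, 20 or 30 ms frames

    with wave.open("input.wav", "rb") as wf:
        assert wf.getnchannels() == 1 and wf.getsampwidth() == 2
        rate = wf.getframerate()  # must be 8000, 16000, 32000 or 48000
        pcm = wf.readframes(wf.getnframes())

    vad = webrtcvad.Vad(3)  # 0 = least aggressive, 3 = most aggressive
    frame_bytes = rate * FRAME_MS // 1000 * 2  # samples per frame * 2 bytes

    voiced = bytearray()
    for i in range(0, len(pcm) - frame_bytes + 1, frame_bytes):
        frame = pcm[i:i + frame_bytes]
        if vad.is_speech(frame, rate):
            voiced.extend(frame)

    with wave.open("trimmed.wav", "wb") as out:
        out.setnchannels(1)
        out.setsampwidth(2)
        out.setframerate(rate)
        out.writeframes(bytes(voiced))

Unlike the ffmpeg command above, this drops non-speech frames entirely rather than shrinking them to short pauses, so it trims more aggressively at the cost of choppier audio.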