
507 points martinald | 1 comment
caminanteblanco No.45053030
Ok, one issue I have with this analysis is the breakdown between input and output tokens. I'm the kind of person who spends most of my chats asking questions, so I might only use ~20 input tokens per prompt, while Gemini has to put out several hundred, which would seem to affect the economics quite a bit.
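The asymmetry this comment describes can be made concrete with a quick cost sketch. The per-million-token prices and token counts below are assumptions for illustration, not any provider's actual rates; the point is only that output tokens typically cost several times more than input tokens, so question-heavy chats skew toward the expensive side:

```python
# Hypothetical per-million-token prices (assumed, not real rates).
# Output tokens are priced several times higher than input tokens,
# as is typical for LLM APIs.
PRICE_IN_PER_M = 0.30    # $ per 1M input tokens
PRICE_OUT_PER_M = 2.50   # $ per 1M output tokens

def turn_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one chat turn at the assumed prices."""
    return (input_tokens * PRICE_IN_PER_M
            + output_tokens * PRICE_OUT_PER_M) / 1_000_000

# A short question with a long answer, as described in the comment:
short_question = turn_cost(20, 500)
# A long prompt with a short answer, for contrast:
long_prompt = turn_cost(500, 20)
print(f"${short_question:.6f} vs ${long_prompt:.6f}")
```

Under these assumed prices, the output-heavy turn costs roughly six times as much as the input-heavy one, which is why the input/output split matters for the economics.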
red2awn No.45055059
It also doesn't take into account that many of the new models are reasoning models, which emit a lot of output tokens.
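Reasoning models typically bill their hidden "thinking" tokens at the output-token rate, so the effective cost of an answer can be a large multiple of its visible length. A small sketch with assumed numbers (the price and token counts are illustrative, not measured):

```python
# Assumed output-token price and token counts, for illustration only.
PRICE_OUT_PER_M = 2.50     # $ per 1M output tokens (assumed)

visible_answer = 500       # tokens the user actually sees
reasoning_tokens = 3000    # hidden chain-of-thought tokens (assumed)

plain_cost = visible_answer * PRICE_OUT_PER_M / 1_000_000
reasoning_cost = (visible_answer + reasoning_tokens) * PRICE_OUT_PER_M / 1_000_000

# With these numbers the reasoning model's answer costs 7x as much
# for the same visible output.
print(f"{reasoning_cost / plain_cost:.1f}x")
```

The multiplier is just (visible + hidden) / visible, so any analysis that counts only visible output tokens will understate what reasoning models actually spend.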