
507 points martinald | 7 comments
1. caminanteblanco ◴[] No.45053030[source]
Ok, one issue I have with this analysis is the breakdown between input and output tokens. I'm the kind of person who spends most of my chats asking questions, so I might only use 20-ish input tokens per prompt, while Gemini has to put out several hundred, which would seem to affect the economics quite a bit.
replies(3): >>45053249 #>>45053904 #>>45055059 #
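To see why that input/output skew matters, here's a minimal cost sketch. The prices below are made-up placeholders (not any vendor's actual rates), chosen only to reflect the common pattern that output tokens are priced several times higher than input tokens:

```python
# Hypothetical per-token prices (assumptions for illustration only):
# hosted LLM APIs typically charge more for output than input.
PRICE_IN = 0.30 / 1_000_000   # $ per input token (assumed)
PRICE_OUT = 2.50 / 1_000_000  # $ per output token (assumed)

def prompt_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single request given its token counts."""
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

# Short question, chatty answer: 20 tokens in, 400 out.
chatty = prompt_cost(20, 400)
# Same question, terse answer: 20 in, 80 out.
terse = prompt_cost(20, 80)
print(f"chatty: ${chatty:.6f}  terse: ${terse:.6f}  ratio: {chatty / terse:.1f}x")
```

With these assumed prices, the input side is almost negligible: the 20 input tokens cost far less than even the terse answer, so a 5x longer reply makes the request nearly 5x more expensive to serve.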
2. bcrosby95 ◴[] No.45053249[source]
Yeah, I've noticed ChatGPT 5 is very chatty. I can ask a one-sentence question and, depending on the task, get back 3-4 paragraphs, most of which I ignore.
replies(3): >>45053901 #>>45059033 #>>45059393 #
3. ozgung ◴[] No.45053901[source]
Same. It acts like its output tokens are free. My input-to-output ratio is at least 1 to 10, and that's not counting "thought" tokens and its internal generation for agentic tasks.
4. pakitan ◴[] No.45053904[source]
It may hurt them financially, but they are fighting for market share and I'd argue short answers would drive users away. I much prefer the long ones, as they often include things I haven't directly asked about but that are still helpful.
5. red2awn ◴[] No.45055059[source]
It also doesn't take into account that a lot of the new models are reasoning models, which spit out a lot of output tokens.
6. trashface ◴[] No.45059033[source]
Switch to the Robot personality.
7. solarkraft ◴[] No.45059393[source]
I haven’t used it without customization, but I find it follows my brevity user instructions more strictly.