
504 points by Terretta | 4 comments
boole1854 ◴[] No.45064512[source]
It's interesting that the benchmark they are choosing to emphasize (in the one chart they show and even in the "fast" name of the model) is token output speed.

I would have thought it an uncontroversial view among software engineers that token quality is much more important than token output speed.

replies(14): >>45064582 #>>45064587 #>>45064594 #>>45064616 #>>45064622 #>>45064630 #>>45064757 #>>45064772 #>>45064950 #>>45065131 #>>45065280 #>>45065539 #>>45067136 #>>45077061 #
jml78 ◴[] No.45064630[source]
To a point. If gpt5 takes 3 minutes to produce its output and qwen3 does it in 10 seconds, the agent can iterate 5 times and still finish before gpt5. Why do I care if gpt5 one-shot it and qwen took 5 iterations?
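A rough back-of-envelope sketch of that arithmetic (the 3-minute and 10-second figures are the ones quoted above; the attempt counts are illustrative assumptions, not benchmarks):

    # Wall-clock time if a task needs `runs` attempts at `seconds_per_run` each.
    def wall_clock(seconds_per_run: float, runs: int) -> float:
        return seconds_per_run * runs

    # Figures from the comment above; attempt counts assumed for illustration.
    slow_one_shot = wall_clock(180, 1)  # gpt5: 3 minutes, gets it in one try
    fast_iterated = wall_clock(10, 5)   # qwen3: 10 seconds per try, 5 tries

    print(slow_one_shot, fast_iterated)  # 180 vs 50 -> the fast model finishes first

Of course this ignores the overhead of checking each attempt, which is part of what the replies below push back on.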
replies(2): >>45065130 #>>45074590 #
1. wahnfrieden ◴[] No.45065130[source]
It doesn’t though. Fast but dumb models don’t progressively get better with more iterations.
replies(2): >>45065701 #>>45067855 #
2. dmix ◴[] No.45065701[source]
That very much depends on the use case.

Different models for different things.

Not everyone is solving complicated things every time they hit cmd-k in Cursor or use autocomplete, and they can easily switch to a different model when working out harder problems via longer-form chat.

3. Jcampuzano2 ◴[] No.45067855[source]
There are many ways to skin a cat.

Often all it takes is resetting to a checkpoint or undoing, then adjusting the prompt a bit with additional context, and even dumber models can get things right.

I've used grok code fast plenty this week, alongside gpt 5 when I need to pull out the big guns, and it's refreshing to use a fast model for smaller changes or for tasks that are tedious and repetitive, like refactoring.

replies(1): >>45068076 #
4. wahnfrieden ◴[] No.45068076[source]
Yes, fast/dumb models are useful! But that's not what OP said - they said they can be as useful as the large models by iterating them.

Do you use them successfully in cases where you just had to re-run them 5 times to get a good answer, and was that a better experience than going straight to GPT 5?