504 points Terretta | 51 comments
1. boole1854 ◴[] No.45064512[source]
It's interesting that the benchmark they are choosing to emphasize (in the one chart they show and even in the "fast" name of the model) is token output speed.

I would have thought it an uncontroversial view among software engineers that token quality is much more important than token output speed.

replies(14): >>45064582 #>>45064587 #>>45064594 #>>45064616 #>>45064622 #>>45064630 #>>45064757 #>>45064772 #>>45064950 #>>45065131 #>>45065280 #>>45065539 #>>45067136 #>>45077061 #
2. eterm ◴[] No.45064582[source]
It depends how fast.

If an LLM is often going to be wrong anyway, then being able to try prompts quickly and then iterate on those prompts, could possibly be more valuable than a slow higher quality output.

Ad absurdum, if it could ingest and work on an entire project in milliseconds, then it has much greater value to me than a process which might take a day to do the same, even if the likelihood of success is also strongly affected.

It simply enables a different method of interactive working.

Or it could supply 3 different suggestions in-line while working on something, rather than a process which needs to be explicitly prompted and waited on.

Latency can have critical impact on not just user experience but the very way tools are used.

Now, will I try Grok? Absolutely not, but that's a personal decision due to not wanting anything to do with X, rather than a purely rational decision.

replies(3): >>45064736 #>>45064784 #>>45064870 #
3. 6r17 ◴[] No.45064587[source]
Tbh I kind of disagree; there are certain use cases where speed would legitimately be much more interesting, such as generating a massive amount of HTML. Though I agree this makes it look like even more of a joke for anything serious.

They reduce the costs though!

4. jsheard ◴[] No.45064594[source]
That's far from the worst metric that xAI has come up with...

https://xcancel.com/elonmusk/status/1958854561579638960

replies(1): >>45066065 #
5. esafak ◴[] No.45064616[source]
I agree. Coding faster than humans can review it is pointless. Between fast, good, and cheap, I'd prioritize good and cheap.

Fast is good for tool use and synthesizing the results.

6. peab ◴[] No.45064622[source]
depends for what.

For autocompleting simple functions (string manipulation, function definitions, etc), the quality bar is pretty easy to hit, and speed is important.

If you're just vibe coding, then yeah, you want quality. But if you know what you're doing, I find having a dumber fast model is often nicer than a slow smart model that you still need to correct a bit, because it's easier to stay in flow state.

With the slow reasoning models, the workflow is more like working with another engineer, where you have to review their code in a PR

7. jml78 ◴[] No.45064630[source]
To a point. If gpt5 takes 3 minutes to output and qwen3 does it in 10 seconds and the agent can iterate 5 times to finish before gpt5, why do I care if gpt5 one-shot it and qwen took 5 iterations?
replies(2): >>45065130 #>>45074590 #
8. postalcoder ◴[] No.45064736[source]
Besides being a faster slot machine, to the extent that they're any good, a fast agentic LLM would be very nice to have for codebase analysis.
replies(1): >>45067357 #
9. furyofantares ◴[] No.45064757[source]
Fast can buy you a little quality by getting more inference on the same task.

I use Opus 4.1 exclusively in Claude Code but then I also use zen-mcp server to get both gpt5 and gemini-2.5-pro to review the code and then Opus 4.1 responds. I will usually have eyeballed the code somewhere in the middle here but I'm not fully reviewing until this whole dance is done.

I mean, I obviously agree with you in that I've chosen the slowest models available at every turn here, but my point is I would be very excited if they also got faster because I am using a lot of extra inference to buy more quality before I'm touching the code myself.
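
For the curious, the rough shape of that dance as a minimal sketch (not the actual zen-mcp wiring): it assumes OpenAI-compatible endpoints for the reviewer models, and the model ids, base URL, and review prompt are placeholders.

  # Minimal sketch of the "two reviewers, one primary" loop described above.
  # Assumptions: OpenAI-compatible endpoints for each reviewer; model ids,
  # base URL, and the review prompt are placeholders, not zen-mcp internals.
  from openai import OpenAI

  def review(client: OpenAI, model: str, diff: str) -> str:
      resp = client.chat.completions.create(
          model=model,
          messages=[{"role": "user",
                     "content": f"Review this diff for bugs and design issues:\n{diff}"}],
      )
      return resp.choices[0].message.content

  def cross_review(diff: str) -> list[str]:
      reviewers = [
          (OpenAI(), "gpt-5"),  # assumed model id
          (OpenAI(base_url="https://example.invalid/v1", api_key="YOUR_KEY"),
           "gemini-2.5-pro"),   # placeholder endpoint for the second reviewer
      ]
      # The primary model (Opus in the workflow above) would then be prompted
      # with these reviews and asked to respond before any human review.
      return [review(client, model, diff) for client, model in reviewers]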

replies(1): >>45065042 #
10. giancarlostoro ◴[] No.45064772[source]
I'm more curious whether it's based on Grok 3 or what; I used to get reasonable answers from Grok 3. If that's the case, the trick that works for Grok and basically any model out there is to ask for things in order and piecemeal, not all at once. Some models will be decent at the 'all at once' approach, but when I and others have asked it in steps it gave us much better output. I'm not yet sure how I feel about Grok 4; I have not really been impressed by it.
11. giancarlostoro ◴[] No.45064784[source]
> If an LLM is often going to be wrong anyway, then being able to try prompts quickly and then iterate on those prompts, could possibly be more valuable than a slow higher quality output.

Asking any model to do things in steps is usually better too, as opposed to feeding it three essays.

replies(1): >>45064995 #
12. 34679 ◴[] No.45064870[source]
>If an LLM is often going to be wrong anyway, then being able to try prompts quickly and then iterate on those prompts, could possibly be more valuable than a slow higher quality output.

Before MoE was a thing, I built what I called the Dictator, which was one strong model working with many weaker ones to achieve a similar result as MoE, but all the Dictator ever got was Garbage In, so guess what came out?
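
Presumably something along these lines: a minimal sketch of the one-strong-over-many-weak idea, where the model ids, prompts, and merge step are placeholders, not the original Dictator code.

  # Sketch: several weak models draft answers, one strong "dictator" merges them.
  # If the drafts are garbage, the merge step has nothing good to work with.
  from openai import OpenAI

  client = OpenAI()
  WEAK_MODELS = ["weak-model-a", "weak-model-b", "weak-model-c"]  # placeholder ids
  STRONG_MODEL = "strong-model"                                   # placeholder id

  def ask(model: str, prompt: str) -> str:
      resp = client.chat.completions.create(
          model=model,
          messages=[{"role": "user", "content": prompt}],
      )
      return resp.choices[0].message.content

  def dictator(prompt: str) -> str:
      # Weak models each draft an answer; the strong model merges and corrects.
      drafts = [ask(m, prompt) for m in WEAK_MODELS]
      merged = "\n\n---\n\n".join(drafts)
      return ask(STRONG_MODEL,
                 "Here are candidate answers:\n" + merged +
                 "\n\nProduce one corrected, merged answer to: " + prompt)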

replies(3): >>45065169 #>>45068763 #>>45073448 #
13. ffsm8 ◴[] No.45064995{3}[source]
I thought the current vibe was doing the former to produce the latter and then use the output as the task plan?
replies(1): >>45065164 #
14. dotancohen ◴[] No.45065042[source]

  > I use Opus 4.1 exclusively in Claude Code but then I also use zen-mcp server to get both gpt5 and gemini-2.5-pro to review the code and then Opus 4.1 responds.
I'd love to hear how you have this set up.
replies(1): >>45065107 #
15. mchusma ◴[] No.45065107{3}[source]
This is a nice setup. I wonder how much it helps in practice? I suspect most of the problems opus has for me are more context related, and I’m not sure more models would help. Speculation on my part.
16. wahnfrieden ◴[] No.45065130[source]
It doesn’t though. Fast but dumb models don’t progressively get better with more iterations.
replies(2): >>45065701 #>>45067855 #
17. M4v3R ◴[] No.45065131[source]
Speed absolutely matters. Of course if the quality is trash then it doesn't matter, but a model that's on par with Claude Sonnet 4 AND very speedy would be an absolute game changer in agentic coding. Right now you craft a prompt, hit send and then wait, and wait, and then wait some more, and after some time (anywhere from 30 seconds to minutes later) the agent finishes its job.

It's not long enough for you to context switch to something else, but long enough to be annoying and these wait times add up during the whole day.

It also discourages experimentation if you know that every prompt will potentially take multiple minutes to finish. If it instead finished in seconds then you could iterate faster. This would be especially valuable in the frontend world where you often tweak your UI code many times until you're satisfied with it.

18. giancarlostoro ◴[] No.45065164{4}[source]
I don't know what other people are doing, I mostly use LLMs:

* Scaffolding

* Ask it what's wrong with the code

* Ask it for improvements I could make

* Ask it what the code does (amazing for old code you've never seen)

* Ask it to provide architect level insights into best practices

One area where they all seem to fail is lesser-known packages: they tend to reference old functionality that is not there anymore, or never was; they hallucinate. Which is part of why I don't ask it for too much.

Junie did impress me, but it was very slow, so I would love to see a version of Junie using this version of Grok, it might be worthwhile.

replies(3): >>45067042 #>>45067401 #>>45067478 #
19. _kb ◴[] No.45065169{3}[source]
You just need to scale out more. As you approach infinite monkeys, sorry - models, you'll surely get the result you need.
replies(1): >>45067012 #
20. defen ◴[] No.45065280[source]
> I would have thought it an uncontroversial view among software engineers that token quality is much more important than token output speed.

We already know that in most software domains, fast (as in, getting it done faster) is better than 100% correct.

21. ojosilva ◴[] No.45065539[source]
After trying Cerebras' free API (not affiliated), which delivers Qwen Coder 480b and gpt-oss-120b at a mind-boggling ~3000 tps, output speed is the first thing I check when considering a model. I just wish Cerebras had a better overall offering on their cloud: usage is capped at 70M tokens/day, and people are reporting that it's easily hit and highly crippling for daily coding.
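
A rough way to check that kind of throughput against an OpenAI-compatible endpoint, as a sketch: the base URL and model id are assumptions, and this measures end-to-end request time rather than pure generation speed.

  # Rough tokens/sec check against an OpenAI-compatible endpoint.
  # base_url and model id are assumptions; substitute whatever the provider documents.
  import time
  from openai import OpenAI

  client = OpenAI(base_url="https://api.cerebras.ai/v1", api_key="YOUR_KEY")

  start = time.time()
  resp = client.chat.completions.create(
      model="qwen-3-coder-480b",  # assumed model id
      messages=[{"role": "user", "content": "Write a binary search in Python."}],
  )
  elapsed = time.time() - start
  out_tokens = resp.usage.completion_tokens
  print(f"{out_tokens} tokens in {elapsed:.2f}s -> {out_tokens / elapsed:.0f} tok/s")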
replies(1): >>45076951 #
22. dmix ◴[] No.45065701{3}[source]
That very much depends on the use case.

Different models for different things.

Not everyone is solving complicated things every time they hit cmd-k in Cursor or use autocomplete, and they can easily switch to a different model when working harder stuff out via longer form chat.

23. Rover222 ◴[] No.45066065[source]
what's wrong with rapid updates to an app?
replies(5): >>45067028 #>>45067061 #>>45068102 #>>45069218 #>>45070365 #
24. dingnuts ◴[] No.45067012{4}[source]
why's this guy getting downvoted? SamA says we need a Dyson Sphere made of GPUs surrounding the solar system and people take it seriously but this guy takes a little piss out of that attitude and he's downvoted?

this site is the fucking worst

replies(1): >>45070318 #
25. cosmicgadget ◴[] No.45067028{3}[source]
They aren't a metric for showing you are better than the competition.
replies(1): >>45068209 #
26. dingnuts ◴[] No.45067042{5}[source]
> amazing for old code you've never seen

not if you have too much! a few hundred thousand lines of code and you can't ask shit!

plus, you just handed over your company's entire IP to whoever hosts your model

replies(2): >>45067425 #>>45068396 #
27. ori_b ◴[] No.45067061{3}[source]
It's like measuring how fast your car can go by counting how often you clean the upholstery.

There's nothing wrong with doing it, but it's entirely unrelated to performance.

replies(1): >>45068200 #
28. CuriouslyC ◴[] No.45067136[source]
For agentic workflows, speed and good tool use are the most important thing. Agents should use tools for things by design, and that can include reasoning tools and oracles. The agent doesn't need to be smart; it just needs a line to someone who is, who can give it a hyper-detailed plan to follow.
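
A minimal sketch of that split, with a slow planner consulted once and a fast executor following the plan; the model ids are placeholders, and a real agent loop would also run tools between model calls.

  # Sketch: a fast, cheap executor model that defers planning to a stronger "oracle".
  # Model ids are illustrative placeholders, not any particular vendor's names.
  from openai import OpenAI

  client = OpenAI()

  def plan(task: str) -> str:
      # A strong, slow model writes the hyper-detailed plan once.
      resp = client.chat.completions.create(
          model="strong-planner",  # placeholder model id
          messages=[{"role": "user",
                     "content": f"Write a step-by-step implementation plan for: {task}"}],
      )
      return resp.choices[0].message.content

  def execute(task: str) -> str:
      steps = plan(task)
      # A fast model just follows the plan; in a real agent this step would also
      # run tools (edit files, run tests) between model calls.
      resp = client.chat.completions.create(
          model="fast-executor",  # placeholder model id
          messages=[{"role": "user",
                     "content": f"Follow this plan exactly and report the result:\n{steps}"}],
      )
      return resp.choices[0].message.content
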
29. fmbb ◴[] No.45067357{3}[source]
For 10% less time you can get 10% worse analysis? I don’t understand the tradeoff.
replies(1): >>45070324 #
30. miohtama ◴[] No.45067401{5}[source]
I hope in the future tooling and MCP will be better so agents can directly check what functionality exists in the installed package version instead of hallucinating.
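
A minimal sketch of the kind of check meant here, using plain Python introspection rather than any particular MCP server; an agent tool could return something like this for the installed version instead of letting the model guess the API.

  # Sketch: look up what the installed package actually exposes, instead of
  # trusting the model's memory of its API. Names here are illustrative.
  import importlib
  import importlib.metadata
  import inspect

  def package_surface(name: str) -> dict:
      mod = importlib.import_module(name)
      try:
          version = importlib.metadata.version(name)
      except importlib.metadata.PackageNotFoundError:
          version = getattr(mod, "__version__", "unknown")
      return {
          "version": version,
          "public_names": [n for n in dir(mod) if not n.startswith("_")],
          "signatures": {n: str(inspect.signature(obj))
                         for n, obj in vars(mod).items()
                         if inspect.isfunction(obj)},
      }

  # Example: the real functions and signatures of the stdlib json module.
  print(package_surface("json"))
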
31. miohtama ◴[] No.45067425{6}[source]
It's a fair trade-off for smaller companies where the IP or the software is a necessary evil, not the main unique value added. It's hard to see what evil anyone would do with crappy legacy code.

The IP risks taken may be well worth the productivity boosts.

32. ffsm8 ◴[] No.45067478{5}[source]
> Ask it what's wrong with the code

That's phase 1: ask it to "think deeply" (a Claude keyword; only works with the Anthropic models) while doing that. Then ask it to make a detailed plan for solving the issue, write that into current-fix.md, and add clearly testable criteria for when the issue is solved.

Now you manually check whether the criteria sound plausible; if not, its analysis failed and its output was worthless.

But if it sounds good, you can then start a new session and ask it to read the markdown file and implement the change.

Now you can plausibility-check the diff and are likely done.

But as the sister comment pointed out, agentic coding really breaks apart with large files like you usually have in brownfield projects.

33. Jcampuzano2 ◴[] No.45067855{3}[source]
There are many ways to skin a cat.

Often all it takes is to reset to a checkpoint or undo and adjust the prompt a bit with additional context and even dumber models can get things right.

I've used grok code fast plenty this week alongside gpt 5 when I need to pull out the big guns and it's refreshing using a fast model for smaller changes or for tasks that are tedious but repetitive during things like refactoring.

replies(1): >>45068076 #
34. wahnfrieden ◴[] No.45068076{4}[source]
Yes fast/dumb models are useful! But that's not what OP said - they said they can be as useful as the large models by iterating them.

Do you use them successfully in cases where you just had to re-run them 5 times to get a good answer, and was that a better experience than going straight to GPT 5?

35. tzs ◴[] No.45068102{3}[source]
See the reply, currently at #2 on that Twitter thread, from Jamie Voynow.
36. Rover222 ◴[] No.45068200{4}[source]
I don't think he was saying their release cadence is a direct metric on their model performance. Just that the team iterates and improves the app user experience much more quickly than on other teams.
replies(3): >>45068606 #>>45068692 #>>45070385 #
37. Rover222 ◴[] No.45068209{4}[source]
It's a metric for showing you can move more quickly on product improvements. Anyone who has worked on a product team at a large tech company knows how much things get slowed down by process bloat.
38. giancarlostoro ◴[] No.45068396{6}[source]
If Apple keeps improving things, you can run the model locally. I'm able to run models on my MacBook with an M4 that I can't even run on my 3080 GPU (mostly due to VRAM constraints), and they run reasonably fast. Would the 3080 be faster? Sure, but the M4 is also plenty fast, to the point that I'm not sitting there waiting longer than I wait for a cloud model to "reason" and look things up.

I think the biggest thing for offline LLMs will have to be consistency in having them search the web with an API like Google's or some other search engine's API; maybe Kagi could provide an API for people who self-host LLMs (not necessarily for free, but it would still be useful).

39. jdiff ◴[] No.45068606{5}[source]
He seems to be stating that app release cadence correlates with internal upgrades that correlate with model performance. There is no reason for this to be true. He does not seem to be talking about user experience.
40. ori_b ◴[] No.45068692{5}[source]
It's a fucking chat. How many times a day do you need to ship an update?
41. charcircuit ◴[] No.45068763{3}[source]
That doesn't seem similar to MoE at all.
replies(1): >>45073958 #
42. LeafItAlone ◴[] No.45069218{3}[source]
I have a coworker who outshines everybody else in number of commits and pushes in any given time period. It’s pretty amazing the number they can accomplish!

Of course, 95% of them are fixing things they broke in earlier commits and their overall quality is the worst on the team. But, holy cow, they can output crap faster than anyone I’ve seen.

43. kelnos ◴[] No.45070318{5}[source]
Maybe because this site is full of people with differing opinions and stances on things, and react differently to what people say and do?

Not sure who was taking SamA seriously about that; personally I think he's a ridiculous blowhard, and statements like that just reinforce that view for me.

Please don't make generalizations about HN's visitors'/commenters' attitudes on things. They're never generally correct.

44. kelnos ◴[] No.45070324{4}[source]
I mean, if that's literally what the numbers are, sure, maybe that's not great. But what if it's 10% less time and 3% worse analysis? Maybe that's valuable.
45. kelnos ◴[] No.45070365{3}[source]
That metric doesn't really tell you anything. Maybe I'm making rapid updates to my app because I'm a terrible coder and I keep having to push out fixes to critical bugs. Maybe I'm bored and keep making little tweaks to the UI, and for some reason think that's worth people's time to upgrade. (And that's another thing: frequent upgrades can be annoying!)

But sure, ok, maybe it could mean making much faster progress than competitors. But then again, it could also mean that competitors have a much more mature platform, and you're only releasing new things so often because you're playing catch-up.

(And note that I'm not specifically talking about LLMs here. This metric is useless for pretty much any kind of app or service.)

46. kelnos ◴[] No.45070385{5}[source]
Oh c'mon, I know it's usually best to try to interpret things in the most charitable way possible, but clearly Musk was implying the actual meat of things, the model itself, is what's being constantly improved.

But even if your interpretation is correct, frequency of releases still is not a good metric. That could just mean that you have a lot to fix, and/or you keep breaking and fixing things along the way.

47. LinXitoW ◴[] No.45073448{3}[source]
Sounds more like a Mixture of Idiots.
48. 34679 ◴[] No.45073958{4}[source]
Well, I really didn't provide sufficient detail to make that determination either way.
49. ant6n ◴[] No.45074590[source]
ChatGPT 5 takes 5 times as long to finish, and still produces garbage.
50. scottyeager ◴[] No.45076951[source]
They have a "max" plan with 120m tokens/day limit for $200/month: https://www.cerebras.ai/blog/introducing-cerebras-code
51. scottyeager ◴[] No.45077061[source]
Fast inference can change the entire dynamic of working with these tools. At typical speeds, I usually try to do something else while the model works. When the model works really fast, I can easily wait for it to finish.

So the total difference includes the cost of context switching, which is big.

Potentially speed matters less in a scenario that is focused on more autonomous agents running in the background. However I think most usage is still highly interactive these days.