
2127 points bakugo | 1 comment
azinman2 ◴[] No.43163378[source]
To me the biggest surprise was seeing Grok dominate in all of their published benchmarks. I haven’t seen any independent benchmarks of it yet (and I take the published ones with a giant heap of salt), but it’s interesting nevertheless.

I’m rooting for Anthropic.

replies(4): >>43163397 #>>43163430 #>>43163485 #>>43163938 #
phillipcarter ◴[] No.43163430[source]
Neither a statement for nor against Grok or Anthropic:

I've now just taken to seeing benchmarks as pretty lines or bars on a chart that are in no way reflective of actual ability for my use cases. Claude has consistently scored lower on some benchmarks for me, but when I use it in a real-world codebase, it's consistently been the only one that doesn't veer off course or "feel wrong". The others do. I can't quantify it, but that's how it goes.

replies(1): >>43163491 #
vessenes ◴[] No.43163491[source]
O1 pro is excellent at figuring out complex stuff that Claude misses. It’s my go-to mid-level debug assistant when Claude spins.
replies(3): >>43167331 #>>43169432 #>>43173437 #
maeil ◴[] No.43167331[source]
I've found the same, but I find o3-mini just as good for that. Sonnet is far better as a general model, but when it's an open-ended technical question that isn't just about code, o3-mini figures it out while Sonnet sometimes doesn't. In those cases o3-mini is less inclined to go with the most "obvious" answer when it's wrong.