GPT-5.2

(openai.com)
1019 points | atgctg | 10 comments
josalhor ◴[] No.46235005[source]
From GPT 5.1 Thinking:

ARC AGI v2: 17.6% -> 52.9%

SWE Verified: 76.3% -> 80%

That's pretty good!

replies(7): >>46235062 #>>46235070 #>>46235153 #>>46235160 #>>46235180 #>>46235421 #>>46236242 #
verdverm ◴[] No.46235062[source]
We're also in benchmark saturation territory. I heard it speculated that Anthropic emphasizes benchmarks less in their publications because internally they don't care about them nearly as much as making a model that works well on the day-to-day
replies(5): >>46235126 #>>46235266 #>>46235466 #>>46235492 #>>46235583 #
1. Mistletoe ◴[] No.46235266[source]
How do you measure whether it works better day to day without benchmarks?
replies(3): >>46235305 #>>46235348 #>>46235398 #
2. standardUser ◴[] No.46235305[source]
Subscriptions.
replies(1): >>46236136 #
3. bulbar ◴[] No.46235348[source]
Manually labeling answers, maybe? There's a lot of infrastructure built around manual labeling, it's been in heavy use for two decades, and it's relatively cheap.

That's still benchmarking of course, but not utilizing any of the well known / public ones.

4. verdverm ◴[] No.46235398[source]
Internal evals. The big AI labs certainly have good proprietary training and eval data; it's one reason their models are better.
replies(1): >>46235532 #
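The idea of an internal eval on manually labeled answers can be sketched in a few lines. This is a minimal illustration, not any lab's actual harness; `model_answer`, `run_eval`, and the tiny labeled set are all hypothetical stand-ins.

```python
# Minimal sketch of a private eval harness (all names hypothetical):
# score a model's answers against manually labeled reference answers
# and report exact-match accuracy.

def model_answer(prompt: str) -> str:
    # Hypothetical stand-in; a real harness would call the model here.
    canned = {"2+2": "4", "capital of France": "Paris"}
    return canned.get(prompt, "unknown")

def run_eval(labeled_set: list[tuple[str, str]]) -> float:
    """Return exact-match accuracy over (prompt, reference) pairs."""
    correct = sum(
        model_answer(prompt).strip() == reference
        for prompt, reference in labeled_set
    )
    return correct / len(labeled_set)

private_eval = [("2+2", "4"), ("capital of France", "Paris")]
print(run_eval(private_eval))  # prints 1.0 on this toy set
```

Publishing only the resulting number, without the labeled set itself, is exactly the situation debated below: the score is quantitative internally but hard for outsiders to interpret.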
5. aydyn ◴[] No.46235532[source]
Then publish the results of those internal evals. Public benchmark saturation isn't an excuse to be un-quantitative.
replies(1): >>46235607 #
6. verdverm ◴[] No.46235607{3}[source]
How would published numbers be useful without knowing what underlying data is being used to test and evaluate the models? The data is proprietary for a reason.

To think that Anthropic isn't being intentional and quantitative in its model building, just because it cares less about saturated benchmaxxing, is to miss the forest for the trees.

replies(1): >>46236582 #
7. mrguyorama ◴[] No.46236136[source]
Ah yes, humans are famously empirical in their behavior, and we definitely don't have direct evidence that the "best" sports players are much more likely than average to be superstitious, wear "lucky underwear", or buy right into scam bracelets that "give you more balance" via a holographic sticker.
8. aydyn ◴[] No.46236582{4}[source]
Do you know everything that exists in public benchmarks?

They can give a description of what their metrics are without giving away anything proprietary.

replies(1): >>46238542 #
9. verdverm ◴[] No.46238542{5}[source]
I'd recommend watching the video Nathan Lambert dropped yesterday on Olmo 3 Thinking. You'll learn there are a lot of places where even a description of a proprietary testing regime would give away some secret sauce.

Nathan is at Ai2, which is all about open-sourcing the process, experience, and learnings along the way.

replies(1): >>46241985 #
10. aydyn ◴[] No.46241985{6}[source]
Thanks for the reference, I'll check it out. But it doesn't really take away from my point: if one level of description would give away proprietary information, then go one level up to a vaguer description. Describing things at the proper level is more of a social problem than a technical one.