
GPT-5.2 (openai.com)
1019 points by atgctg | 8 comments
sigmar ◴[] No.46235197[source]
Are there any specifics about how this was trained? Especially when 5.1 is only a month old. I'm a little skeptical of benchmarks these days and wish they'd put this up on llmarena

edit: noticed 5.2 is ranked in the webdev arena (#2, tied with gemini-3.0-pro), but not yet in the text arena (last update 22hrs ago)

replies(2): >>46235312 #>>46235510 #
1. emp17344 ◴[] No.46235510[source]
I’m extremely skeptical because of all those articles claiming OpenAI was freaking out about Gemini - now it turns out they just casually had a better model ready to go? I don’t buy it.
replies(4): >>46235534 #>>46236858 #>>46239622 #>>46240400 #
2. tempaccount420 ◴[] No.46235534[source]
They had to rush it out; I'm sure the internal safety folks are not happy about it.
3. Workaccount2 ◴[] No.46236858[source]
I (and others) have a strong suspicion that they can modulate a model's intelligence in near real time by adjusting quantization and thinking time.

It seems that, whenever they want, they can really gas a model up for the moment and back it off after the hype wave.
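
A minimal sketch of what that hypothesis would look like on the serving side, assuming a hypothetical ServingProfile config (the names and numbers here are illustrative, not anything from OpenAI):

    from dataclasses import dataclass

    # Hypothetical serving knobs -- purely illustrative, not OpenAI's actual config.
    @dataclass
    class ServingProfile:
        quantization: str         # e.g. "fp16", "fp8", "fp4"
        max_thinking_tokens: int  # budget for hidden reasoning tokens

    # The suspicion above: swap profiles depending on how much capability
    # you want to show at a given moment.
    LAUNCH_WEEK = ServingProfile(quantization="fp16", max_thinking_tokens=32_000)
    STEADY_STATE = ServingProfile(quantization="fp8", max_thinking_tokens=8_000)

    def pick_profile(hype_mode: bool) -> ServingProfile:
        """Return the profile used to serve requests right now."""
        return LAUNCH_WEEK if hype_mode else STEADY_STATE

    print(pick_profile(hype_mode=True))   # launch-week settings
    print(pick_profile(hype_mode=False))  # steady-state settings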

replies(2): >>46239626 #>>46239819 #
4. bamboozled ◴[] No.46239622[source]
It's very much in line with their PR strategy, or lack thereof.
5. bamboozled ◴[] No.46239626[source]
Yeah, I noticed this with Claude: around the time of the Opus 4.5 release, Sonnet 4.5 was just dumb for at least a few days, but it seemed temporary. I suspect they redirected resources to Opus.
6. qeternity ◴[] No.46239819[source]
Quantization is not some magical dial you can just turn. In practice you basically have 3 choices: fp16, fp8 and fp4.
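
To make the "few discrete choices" point concrete, a rough back-of-the-envelope sketch of the weight footprint for a hypothetical 70B-parameter model at each precision (illustrative numbers only):

    # Quantization picks one of a few discrete formats; it is not a continuous dial.
    PARAMS = 70e9  # hypothetical 70B-parameter model
    BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

    for fmt, nbytes in BYTES_PER_PARAM.items():
        gib = PARAMS * nbytes / 2**30
        print(f"{fmt}: ~{gib:.0f} GiB of weights")
    # fp16: ~130 GiB, fp8: ~65 GiB, fp4: ~33 GiB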

Also, more thinking time means more tokens, which costs more, especially at the API level where you are paying per token and it would be trivially observable.

There is basically no evidence that either of these is occurring in the way you suggest (boosting a model up and then backing it off).
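
On the observability point: anyone paying per token can back out how many billed thinking tokens a request used, so a quiet boost in reasoning budget would show up in the usage numbers. A minimal sketch, assuming a hypothetical per-request usage payload (field names and prices are illustrative, not any specific vendor's API):

    # Hypothetical usage payload returned with a response; field names are illustrative.
    usage = {
        "prompt_tokens": 1_200,
        "reasoning_tokens": 9_500,   # hidden "thinking" tokens, still billed as output
        "completion_tokens": 800,
    }

    # Illustrative prices in USD per token (not real pricing).
    PRICE_IN = 1.25 / 1e6
    PRICE_OUT = 10.00 / 1e6

    billed_output = usage["reasoning_tokens"] + usage["completion_tokens"]
    cost = usage["prompt_tokens"] * PRICE_IN + billed_output * PRICE_OUT
    print(f"billed output tokens: {billed_output}, cost: ${cost:.4f}")
    # If the provider quietly raised the thinking budget, reasoning_tokens
    # (and the bill) would jump, which is why it would be trivially observable.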

replies(1): >>46240543 #
7. robots0only ◴[] No.46240400[source]
How do you know this is a better model? I wouldn't take any of the numbers at face value, especially when all they have done is more/better post-training, so the base pre-trained model's capabilities are still the same. The model may just elicit some of the benchmark capabilities better. You really need to spend time using the model to come to any reliable conclusions.
8. Workaccount2 ◴[] No.46240543{3}[source]
API users probably wouldn't be affected since they are paying in full. Most people complaining are free users, followed by $20/mo users.