
GPT-5.2

(openai.com)
1019 points | atgctg | 10 comments
system2 No.46234981
"Investors are putting pressure, change the version number now!!!"
replies(1): >>46235001 #
1. exe34 No.46235001
I'm quite sad about the S-curve hitting us hard with transformers. For a short period, we had the excitement of "ooh, if GPT-3.5 is so good, GPT-4 is going to be amazing! ooh, GPT-4 has sparks of AGI!" But now we're back to version inflation for inconsequential gains.
replies(4): >>46235029 #>>46235236 #>>46235245 #>>46235399 #
2. verdverm No.46235029
2025 is the year most of the big AI labs released their first real thinking models.

Now we can create new samples and evals for more complex tasks to train up the next gen: more planning, decomposition, context handling, and agentic work.

OpenAI has largely fumbled their early lead; the exciting stuff is happening elsewhere.

3. ToValueFunfetti No.46235236
Take this all with a grain of salt as it's hearsay:

From what I understand, nobody has done any real scaling since the GPT-4 era. 4.5 was a bit larger than 4, but not as much as the orders of magnitude difference between 3 and 4, and 5 is smaller than 4.5. Google and Anthropic haven't gone substantially bigger than GPT-4 either. Improvements since 4 are almost entirely from reasoning and RL. In 2026 or 2027, we should see a model that uses the current datacenter buildout and actually scales up.

replies(2): >>46235487 #>>46235961 #
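A rough way to see what "actually scales up" would mean in compute terms is the standard C ≈ 6·N·D approximation for dense transformer training FLOPs (N parameters, D training tokens). The sketch below is purely illustrative; the parameter and token counts are assumptions, not figures from the thread or from any lab.

```python
# Back-of-envelope training compute using the common C ~ 6 * N * D rule of thumb.
# The parameter and token counts are illustrative assumptions, not known figures
# for any real model.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the 6 * N * D approximation."""
    return 6 * params * tokens

baseline = training_flops(params=1e12, tokens=1e13)    # hypothetical "GPT-4-era" run
scaled_up = training_flops(params=1e13, tokens=1e14)   # 10x parameters, 10x data

print(f"baseline : {baseline:.1e} FLOPs")
print(f"scaled up: {scaled_up:.1e} FLOPs ({scaled_up / baseline:.0f}x the compute)")
```

Under this approximation, going up one order of magnitude in both parameters and data is roughly a 100x jump in training compute, which is why a genuine scale-up waits on new datacenter capacity.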
4. JanSt No.46235245
I don't feel the S-curve at all yet. Still an exponential for me
replies(1): >>46238642 #
5. gessha No.46235399
Because it will take thousands of underpaid researchers randomly searching through the solution space to get to the next improvement, not 2-3 companies pressed to monetize and enshittify their product before the money runs out. That, and winning more hardware lotteries.
replies(1): >>46239328 #
6. snovv_crash No.46235487
Datacenter capacity is being snapped up for inference too though.
7. Leynos No.46235961
4.5 is widely believed to be an order of magnitude larger than GPT-4, as reflected in the API inference cost. The problem is the number of parameters you can fit in the memory of one GPU. Pretty much every large GPT model from 4 onwards has been a mixture-of-experts model, but for a 10-trillion-parameter-scale model, you'd be talking about a lot of experts and a lot of inter-GPU communication.

With FP4 on the Blackwell GPUs, it should become much more practical to serve a model of that size at the scale of a GPT-5.x roll-out. We're just going to have to wait for the GBx00 systems to be physically deployed at scale.
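A minimal sketch of the memory arithmetic behind this point: weights alone at a given precision, divided across GPUs. The 10T parameter count and the GPU memory sizes are illustrative assumptions, not specs tied to any particular product, and the estimate ignores KV cache and activations.

```python
import math

def weight_memory_gb(params: float, bits_per_param: int) -> float:
    """Memory needed for the weights alone, in GB (ignores KV cache and activations)."""
    return params * bits_per_param / 8 / 1e9

PARAMS = 10e12                                            # hypothetical 10T-parameter MoE model
GPUS = {"80 GB-class GPU": 80, "192 GB-class GPU": 192}   # assumed per-GPU HBM sizes

for precision, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    total = weight_memory_gb(PARAMS, bits)
    shards = ", ".join(f"{math.ceil(total / mem)} x {name}" for name, mem in GPUS.items())
    print(f"{precision}: {total:,.0f} GB of weights -> at least {shards}")
```

Even at FP4, a model of that scale needs its weights sharded across dozens of GPUs, which is where the inter-GPU communication cost of a many-expert MoE comes in.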

8. exe34 No.46238642
With a very long doubling time?
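Part of why this exchange can't be settled by feel: before its inflection point, a logistic S-curve is nearly indistinguishable from an exponential with the same doubling time. A tiny sketch with purely illustrative numbers, assuming nothing about real capability metrics:

```python
import math

def exponential(t, doubling_time=1.0):
    """Pure exponential growth starting at 1, doubling every doubling_time."""
    return 2 ** (t / doubling_time)

def s_curve(t, ceiling=1000.0, doubling_time=1.0):
    """Logistic curve that also starts at 1 and initially doubles at the same rate."""
    k = math.log(2) / doubling_time
    return ceiling / (1 + (ceiling - 1) * math.exp(-k * t))

for t in range(0, 9, 2):
    print(f"t={t}: exponential={exponential(t):7.1f}   s-curve={s_curve(t):7.1f}")
```

The two curves track each other closely until growth approaches the ceiling, so "still exponential" and "S-curve whose knee we haven't seen yet" can both fit the same observations.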
9. astrange No.46239328
Underpaid? OpenAI!? It's pretty good I think.

https://www.levels.fyi/companies/openai/salaries/software-en...

replies(1): >>46240665 #
10. gessha No.46240665
I’m talking about grad students, not OpenAI researchers.