
GPT-5.2

(openai.com)
1019 points atgctg | 9 comments
agentifysh ◴[] No.46238067[source]
Looks like they've begun censoring posts at r/Codex and not allowing complaint threads, so here is my honest take:

- It is faster, which is appreciated, but not as fast as Opus 4.5

- I see very few changes and little noticeable improvement over 5.1

- I do not see any value in exchange for +40% in token costs

All in all, I can't help but feel that OpenAI is facing an existential crisis. Gemini 3, even when it's used from AI Studio, offers performance close to ChatGPT Pro for free. Anthropic's Claude Code at $100/month is tough to beat. I am using Codex with the $40 credits, but there's been a silent increase in token costs and usage limitations.

replies(3): >>46238965 #>>46240393 #>>46241715 #
1. AstroBen ◴[] No.46238965[source]
Did you notice much improvement going from Gemini 2.5 to 3? I didn't

I just think they're all struggling to provide real world improvements

replies(8): >>46239052 #>>46239296 #>>46239714 #>>46240131 #>>46240302 #>>46240549 #>>46240983 #>>46241460 #
2. XCSme ◴[] No.46239052[source]
Maybe they are just more consistent, which is a bit hard to notice immediately.
3. dcre ◴[] No.46239296[source]
Nearly everyone else (and every measure) seems to have found 3 a big improvement over 2.5.
4. enraged_camel ◴[] No.46239714[source]
Gemini 3 was a massive improvement over 2.5, yes.
5. cmrdporcupine ◴[] No.46240131[source]
I think what they're actually struggling with is costs. And I think they're all quantizing models behind the scenes here and there to manage load, which is why they're all giving inconsistent results.

I noticed a huge improvement from Sonnet 4.5 to Opus 4.5 when it became unthrottled a couple of weeks ago. I wasn't going to sign back up with Anthropic, but I did. Two weeks in, though, it's already starting to seem inconsistent. And when I go back to Sonnet, it feels like they did something to lobotomize it.

Meanwhile I can fire up DeepSeek 3.2 or GLM 4.6 for a fraction of the cost and get almost as good as results.

6. agentifysh ◴[] No.46240302[source]
Oh yes, I'm noticing significant improvements across the board, but mainly having the 1,000,000-token context makes a ton of difference: I can keep digging at a problem without compaction.
7. free652 ◴[] No.46240549[source]
Yes, 2.5 just couldn't use tools right. 3.0 is way better at coding, better than Sonnet 4.5.
8. dudeinhawaii ◴[] No.46240983[source]
I noticed a substantial improvement, to the point where I made it my go-to model for questions. Coding-wise, not so much. As an intelligent model for writing up designs, investigations, and general exploration/research tasks, it's top notch.
9. chillfox ◴[] No.46241460[source]
Gemini 3 Pro is the first model from Google that I have found usable, and it's very good. It has replaced Claude for me in some cases, but Claude is still my go-to for use in coding agents.

(I only access these models via API)