
    GPT-5.2

    (openai.com)
    1019 points atgctg | 12 comments
    jumploops ◴[] No.46235526[source]
    > “a new knowledge cutoff of August 2025”

    This (and the price increase) points to a new pretrained model under-the-hood.

    GPT-5.1, in contrast, was allegedly using the same pretraining as GPT-4o.

    replies(5): >>46236646 #>>46236703 #>>46236791 #>>46237182 #>>46240048 #
    1. FergusArgyll ◴[] No.46236791[source]
    A new pretrain would definitely get more than a .1 version bump & would get a whole lot more hype, I'd think. They're expensive to do!
    replies(8): >>46237036 #>>46237044 #>>46237046 #>>46237207 #>>46237270 #>>46239181 #>>46239733 #>>46241840 #
    2. femiagbabiaka ◴[] No.46237036[source]
    Not if they didn't feel it delivered customer value, no? It's about under-promising and over-delivering, in every instance
    3. redwood ◴[] No.46237044[source]
    Not if it underwhelms
    4. hannesfur ◴[] No.46237046[source]
    Maybe they felt the increase in capability doesn't warrant a bigger version bump. Additionally, pre-training isn't as important as it used to be; most of the advances we see now probably come from the RL stage.
    5. caconym_ ◴[] No.46237207[source]
    Releasing anything as "GPT-6" which doesn't provide a generational leap in performance would be a PR nightmare for them, especially after the underwhelming release of GPT-5.

    I don't think it really matters what's under the hood. People expect model "versions" to be indexed on performance.

    6. ACCount37 ◴[] No.46237270[source]
    Not necessarily. GPT-4.5 was a new pretrain on top of a sizeable raw model scale bump, and only got 0.5 - because the gains from reasoning training in o-series overshadowed GPT-4.5's natural advantage over GPT-4.

    OpenAI might have learned not to overhype. They already shipped GPT-5 - which was only an incremental upgrade over o3, and was received poorly, with this being a part of the reason why.

    replies(1): >>46240248 #
    7. boc ◴[] No.46239181[source]
    Maybe the rumors about failed training runs weren't wrong...
    8. jumploops ◴[] No.46239733[source]
    It’s possible they’re using some new architecture to get more up-to-date data, but I think that’d be even more of a headline.

    My hunch is that this is the same 5.1 post-training on a new pretrained base.

    Likely rushed out the door faster than they initially expected/planned.

    9. diego_sandoval ◴[] No.46240248[source]
    I jumped straight from 4o (free user) into GPT-5 (paid user).

    It was a generational leap if there ever was one. Much bigger than 3.5 to 4.

    replies(2): >>46240767 #>>46242399 #
    10. kadushka ◴[] No.46240767{3}[source]
    What kind of improvements do you expect when going from 5 straight to 6?
    11. OrangeMusic ◴[] No.46241840[source]
    Yeah because OpenAI has been great at naming their models so far? ;)
    12. ACCount37 ◴[] No.46242399{3}[source]
    Yes, if OpenAI released GPT-5 after GPT-4o, then it would have been seen as a proper generational leap.

    But o3 existing and being good at what it does? Took the wind out of GPT-5's sails.