
183 points WolfOliver | 1 comment
crooked-v ◴[] No.45066121[source]
For me it's simple: even the best models are "lazy" and will confidently declare they're finished when they're obviously not, and the immense increase in training effort needed to get GPT-5's mild benchmark improvements suggests that quality won't go away anytime soon.
replies(2): >>45066370 #>>45066507 #
anthonypasq ◴[] No.45066507[source]
gpt-5 is extremely cheap; what makes you think they couldn't produce a larger, smarter, more expensive model?

gpt-5 was created to serve 200M daily active users.

replies(1): >>45066831 #
bakugo ◴[] No.45066831[source]
> what makes you think they couldn't produce a larger, smarter, more expensive model?

Because they already tried making a much larger, more expensive model: it was called GPT-4.5. It failed; it wasn't actually that much smarter despite being insanely expensive, and they retired it after a few months.

replies(1): >>45069083 #
anthonypasq ◴[] No.45069083[source]
that was not a reasoning model.
replies(1): >>45069975 #
ForHackernews ◴[] No.45069975[source]
None of them are reasoning models. Some of them just have a word-outputting loop.
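
A minimal sketch of the "word-outputting loop" being described, assuming a generic autoregressive LLM; `generate`, `model.next_token`, and the stop marker are hypothetical names for illustration, not any vendor's real API:

    # Illustrative only: "reasoning" output as a plain autoregressive loop.
    # `model.next_token` is a hypothetical stand-in for any LLM's
    # next-token prediction; no real vendor API is implied.
    def generate(model, prompt: str, max_tokens: int = 1024, stop: str = "<|end|>") -> str:
        text = prompt
        for _ in range(max_tokens):
            token = model.next_token(text)  # predict one token from the text so far
            text += token                   # append it and repeat
            if text.endswith(stop):         # halt on a stop marker or token budget
                break
        return text

    # A "reasoning" mode typically just lets this same loop emit intermediate
    # "thinking" tokens before the final answer; the underlying mechanism is
    # unchanged.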