
625 points lukebennett | 1 comment
1. summerlight No.42143117
I guess this is somewhat expected? The current frontier models have probably already exhausted most of the entropy in the training data accumulated over decades, and new training data is very sparse. Current mainstream architectures are also not capable of the sophisticated search and planning needed to generate new entropy out of thin air. o1 was an interesting attempt to tackle this problem, but we probably still have a long way to go.