I am not an expert, but I have a serious counterpoint.
While training LLMs to replicate human output, intelligence and understanding EMERGE in the internal layers.
It seems straightforward to run unsupervised training on scientific data, planetary motion for instance, and discover closed-form analytic models for it. Deriving Kepler's laws and Newton's equations should be quick, and by that afternoon you'd have far more elaborate models with 500+ variables that humans would struggle to interpret but that explain the data.
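As a minimal toy sketch of why this is plausible (not a full discovery system): given only orbital radii and periods, a one-line regression recovers the 3/2 exponent of Kepler's third law. The planet values below are standard approximate figures in AU and years.

```python
# Toy sketch: recover Kepler's third law (T^2 proportional to a^3)
# from raw orbital data via a log-log linear fit.
import numpy as np

# Semi-major axis a (AU) and orbital period T (years),
# approximate values for Mercury through Saturn.
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])

# Fit log T = k * log a + c; Kepler's third law predicts k = 1.5.
k, c = np.polyfit(np.log(a), np.log(T), 1)
print(f"fitted exponent k = {k:.3f}  (Kepler predicts 1.5)")
```

A real system would search over symbolic model structures rather than assume a power law, but the point stands: simple laws fall out of the data quickly, and nothing stops a model from then piling on hundreds of extra terms that fit better while being opaque to us.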
AGI is what, Artificial General Intelligence? What exactly do we mean by general? As Will Rogers put it, "everybody is ignorant, only on different subjects." These LLMs are already better than 90% of humans at understanding any given subject, in the sense of answering questions about it and carrying on a meaningful, reasonable discussion. Yes, occasionally they stumble or make a mistake, but overall it is very impressive.
And remember, if we care about practical outcomes: as soon as ONE model can do something, ALL COPIES OF IT CAN. So you can reliably get unlimited agents that are better than 90% of humans at understanding every subject. That is a very powerful baseline for replacing most jobs, isn't it?