For the most part, LLMs just remember.
They don't think, learn, or create on their own, at least not anywhere close to a human level. Otherwise, they wouldn't require so much "training".
Essentially, they are best characterized as a huge database with a natural language interface.
Once the internet has been consumed and indexed, this sort of approach starts to hit a wall: there is no more data readily available for "training".
I don't know what the next breakthrough will be, but I firmly believe one will be required to push performance to any significantly higher level.