Always fun to see a theoretical argument that something clearly already happening is impossible.
replies(2):
There's plenty more room to grow with agents and tooling, but the core models are only improving incrementally year over year, rather than the rocket-ship leaps of 2022/23.
If work produced by LLMs always has to be checked for accuracy, their applicability will be limited.
This is perhaps analogous to the "self-driving cars" that still have to be monitored by a human driver; at that point the self-driving system might as well not exist at all.