Edit: I never actually expected AGI from LLMs. That was snark. I just think it's notable that the fundamental gains in LLM performance seem to have dried up.
But why does this paper impact your thinking on it? It is about budget and recognizing that different LLMs have different cost structures. It's not really an attempt to improve LLM performance measured absolutely.
It's mostly hand-waving, hype, credulity, and unproven claims of scalability right now.
You can't move the goalposts because they don't exist.
It'll be a while until the ability to move the goalposts of "actual intelligence" is exhausted entirely.
Doesn't mean there aren't practical definitions depending on the context.
In essence, an AI that could be taught using only resources meant for humans, and nothing more, would be considered AGI. That could be a practical definition, without needing much more rigour.
There is indeed no evidence we'll get there. But there is also no evidence LLMs should work as well as they do.