"An AGI is a human-created system that demonstrates iteratively improving its own conceptual design without further human assistance".
Note that a "conceptual design" here does not include tweaking weights within a formula that has already been established externally.
My reasoning is thus:
1. A system that is only capable of acting with human assistance cannot have its own intelligence disentangled from the humans'
2. A system that is only intelligent enough to solve problems that somehow exclude problems with its own design is not "generally" intelligent
3. A system that can only generate a single round of improvements to its own designs has not demonstrated improvements to those designs: if iteration N+1 were truly superior to iteration N, it would be able to produce iteration N+2
4. A system that is not capable of changing its own design is incapable of iterative improvement, as there is a maximum efficacy within any single framework
5. A system that could improve itself in theory and fails to do so in practice has not demonstrated intelligence
It's pretty clear that no current-day system has hit this milestone; if some program had, there would no longer be a need for continued investment in algorithm design (or computer science, or most of humanity...).
A program that randomly mutates its own code could self-improve in theory but fails to do so in practice.
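To make that point concrete, here is a toy sketch of my own (not anyone's actual system; the test cases, starting source, and mutation scheme are all made up for illustration): a hill-climbing loop that randomly edits its own source string and keeps an edit only if the result still runs and passes more of a fixed test set.

```python
import random
import string

TARGET_TESTS = [(0, 1), (1, 3), (5, 11)]   # desired behaviour: f(x) = 2*x + 1
SOURCE = "def f(x):\n    return x\n"       # deliberately suboptimal starting "design"

def score(src):
    """Number of tests the candidate source passes; -1 if it doesn't run at all."""
    env = {}
    try:
        exec(src, env)
        return sum(1 for x, y in TARGET_TESTS if env["f"](x) == y)
    except Exception:
        return -1

def mutate(src):
    """Insert, delete, or replace one random printable character."""
    i = random.randrange(len(src))
    op = random.choice("idr")
    if op == "i":
        return src[:i] + random.choice(string.printable) + src[i:]
    if op == "d" and len(src) > 1:
        return src[:i] + src[i + 1:]
    return src[:i] + random.choice(string.printable) + src[i + 1:]

best, best_score = SOURCE, score(SOURCE)
still_ran = kept = 0
for _ in range(20_000):
    candidate = mutate(best)
    s = score(candidate)
    still_ran += (s >= 0)            # how many mutants even executed
    if s > best_score:               # keep only strict improvements
        best, best_score, kept = candidate, s, kept + 1

# Typically stalls after at most a lucky single-character win or two.
print(f"mutants that still ran: {still_ran}/20000, improvements kept: {kept}")
```

In a typical run, nearly every mutant fails to execute at all and the loop stalls almost immediately: unguided mutation can improve the design in principle, but demonstrates essentially nothing in practice.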
I don't think these goalposts have moved in the past or need to move in the future. This is what it takes to cause the singularity. The recent movement has been people trying to sell something less than this as an AGI.
I feel this definition doesn't require a current LLM to be able to change its own workings directly, only to be able to generate a guided next generation.
It's possible that LLMs can surpass human beings, purely because I believe we will inevitably be limited by short-term storage constraints that LLMs will not face. It will be a bandwidth-versus-throughput question: an LLM will have a much larger, although slightly slower, store of knowledge than humans have, but will be much quicker than a human at looking up and validating the data.
We aren't there yet.
Selling something that does not yet exist is an essential part of capitalism, which, according to the main thesis of philosophical Accelerationism, is (teleologically) identical to AI. [0] This dynamic is sometimes referred to as Hyperstition, i.e. fictions that make themselves real.
Most people have a rough idea of what AGI means, but we still haven't figured out an exact definition that lines up with reality. As we continue exploring the idea space, we'll keep figuring out which parameters place boundaries and requirements on what AGI means.
There's no reason to just accept an ancient definition from someone who was confused and didn't know any better when they invented it. Older definitions were shots in the dark that pointed in a general direction, but there's no guarantee they hit the exact destination.