No, it's just a far more useful definition: one that is actionable and measurable, rather than "consciousness" or "self-awareness" or similar philosophical notions. The definition on Wikipedia doesn't talk about those either. People working on this by and large don't want to deal with vague, ill-defined concepts that just make people argue in circles. It's not an OpenAI-exclusive thing.
If it acts like one, whether you call the machine conscious or not is pure semantics. It's not as if the potential consequences are any less real.
>LLMs already outperform humans in a huge variety of tasks.
Yes, LLMs are General Intelligences, and if that is your only requirement for AGI, they certainly already qualify[0]. But the definition above hinges on long-horizon planning and competence levels that today's models have generally not yet reached.
>ML in general outperform humans in a large variety of tasks.
This is what the G in AGI is for. AlphaFold doesn't do anything but predict protein structures. Stockfish doesn't do anything but play chess.
>Are all of them AGI? Doubtful.
Well no, because they're missing the G.
[0] https://www.noemamag.com/artificial-general-intelligence-is-...