625 points | lukebennett | 1 comment
irrational No.42139106
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligences. I'm probably wrong, but my impression was that AGI meant the AI is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

og_kalu No.42139257
At this point, AGI means many different things to many different people, but OpenAI defines it as "highly autonomous systems that outperform humans in most economically valuable tasks".
troupo No.42139793
This definition suits OpenAI because it lets them claim AGI after reaching an arbitrary goal.

LLMs already outperform humans at a huge variety of tasks. ML systems in general outperform humans at a large variety of tasks. Are all of them AGI? Doubtful.

og_kalu No.42140183
No, it's just a far more useful definition: one that is actionable and measurable, rather than resting on "consciousness", "self-awareness", or similar philosophical notions. The definition on Wikipedia doesn't talk about those either. People working on this by and large don't want to deal with vague, ill-defined concepts that just make people argue in circles. It's not an OpenAI-exclusive thing.

If a machine acts conscious, whether you call it conscious is pure semantics; the potential consequences are no less real either way.

>LLMs already outperform humans in a huge variety of tasks.

Yes, LLMs are general intelligences, and if that is your only requirement for AGI, they already qualify[0]. But the definition above hinges on long-horizon planning and levels of competence that today's models have generally not yet reached.

>ML in general outperform humans in a large variety of tasks.

This is what the G in AGI is for. AlphaFold doesn't do anything but predict protein structures. Stockfish doesn't do anything but play chess.

>Are all of them AGI? Doubtful.

Well no, because they're missing the G.

[0] https://www.noemamag.com/artificial-general-intelligence-is-...