irrational ◴[] No.42139106[source]
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligences. I'm probably wrong, but my impression was that AGI meant the AI is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

1. og_kalu ◴[] No.42139257[source]
At this point, AGI means many different things to many different people, but OpenAI defines it as "highly autonomous systems that outperform humans at most economically valuable work".
2. troupo ◴[] No.42139793[source]
This definition suits OpenAI because it lets them claim AGI after reaching an arbitrary goal.

LLMs already outperform humans in a huge variety of tasks. ML in general outperforms humans in a large variety of tasks. Are all of them AGI? Doubtful.

3. og_kalu ◴[] No.42140183[source]
No, it's just a far more useful definition: one that is actionable and measurable, not "consciousness" or "self-awareness" or similar philosophical notions. The definition on Wikipedia doesn't talk about those either. People working on this by and large don't want to deal with vague, ill-defined concepts that just make people argue in circles. It's not an OpenAI-exclusive thing.

Whether you call a machine conscious or not is pure semantics if it acts like one. The potential consequences are no less real either way.

>LLMs already outperform humans in a huge variety of tasks.

Yes, LLMs are general intelligences, and if that is your only requirement for AGI, they certainly already qualify[0]. But the definition above hinges on long-horizon planning and competence levels that today's models have generally not yet reached.

>ML in general outperform humans in a large variety of tasks.

This is what the G in AGI is for. AlphaFold doesn't do anything but predict protein structures. Stockfish doesn't do anything but play chess.

>Are all of them AGI? Doubtful.

Well no, because they're missing the G.

[0] https://www.noemamag.com/artificial-general-intelligence-is-...

4. ishtanbul ◴[] No.42140687[source]
Yes, but they aren't very autonomous. They can answer questions very well but can't use that information to further goals. That's what OpenAI seems to be implying: very smart and agentic AI.
5. fragmede ◴[] No.42141745[source]
It's not just marketing bullshit, though. Microsoft is the counterparty to a contract with that claim: money changes hands when it's been achieved. So if sama thinks he's hit it but Microsoft does not, I expect we'll see that argued in a court of law.
6. snapcaster ◴[] No.42172995[source]
At least it's a testable, measurable definition. Everyone else seems to be down boring linguistic rabbit holes or engaged in nonstop goal-post moving.