
625 points | lukebennett | 1 comment
irrational:
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligence. I'm probably wrong, but my impression was that AGI meant the AI is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

og_kalu:
At this point, AGI means many different things to many different people, but OpenAI defines it as "highly autonomous systems that outperform humans in most economically valuable tasks."
troupo:
This definition suits OpenAI because it lets them claim AGI after reaching an arbitrary goal.

LLMs already outperform humans in a huge variety of tasks. ML systems in general outperform humans in a large variety of tasks. Are all of them AGI? Doubtful.

snapcaster:
At least it's a testable, measurable definition. Everyone else seems to be going down boring linguistic rabbit holes or engaging in nonstop goalpost moving.