irrational
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligences. I'm probably wrong, but my impression was that AGI meant the AI is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

Taylor_OD
I think your definition differs from how most people define AGI. Generally, it means being able to think and reason at a human level across most, if not all, tasks or jobs.

"Artificial General Intelligence (AGI) refers to a theoretical form of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level comparable to that of a human being."

Altman says AGI could be here in 2025: https://youtu.be/xXCBz_8hM9w?si=F-vQXJgQvJKZH3fv

But he certainly means an LLM that can perform at or above human level at most tasks, rather than a self-aware entity.

nomel
> than a self-aware entity.

What does this mean? If I have a blind, deaf, paralyzed person who could only communicate through text, what would the signs be that they were self-aware?

Is this more of a feedback-loop problem? If I let the LLM run in a loop and tell it it's talking to itself, would that be approaching "self-aware"?
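
Concretely, here's a minimal sketch of the loop I mean, in Python. The chat() helper is hypothetical, a stand-in for whatever chat-completion API you'd actually wrap; self_talk() just feeds the model its own prior output back to it as conversation history on every turn:

    # Let an LLM "talk to itself": on every turn, replay its own
    # prior output as the conversation history and ask for more.
    # chat() is a hypothetical stand-in for a real chat-completion API.

    def chat(messages: list[dict]) -> str:
        raise NotImplementedError("wrap your LLM API of choice here")

    def self_talk(seed: str, turns: int = 5) -> list[str]:
        system = {"role": "system",
                  "content": "You are talking to yourself. "
                             "Reflect on what you just said."}
        transcript = [seed]
        for _ in range(turns):
            history = [system]
            # Alternate roles so each earlier utterance reads as the
            # model replying to itself.
            for i, text in enumerate(transcript):
                history.append({"role": "assistant" if i % 2 else "user",
                                "content": text})
            transcript.append(chat(history))
        return transcript

Whether running this would actually approach self-awareness, or just produce a transcript that looks like introspection, is exactly my question.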

layer8
Being aware of its own limitations, for example. Or being aware of how its utterances may come across to its interlocutor.

(And by limitations I don’t mean “sorry, I’m not allowed to help you with this dangerous/contentious topic”.)

nuancebydefault
There is no way of proving awareness in humans, let alone machines. We do not even know whether awareness exists or whether it is just a word that people made up to describe some kind of feeling.
layer8
Awareness is exhibited in behavior. It's exactly because of the behavior we observe from LLMs that we don't ascribe awareness to them. I agree that it's difficult to define, and it's also not binary, but it's behavior we'd like AI to have and which LLMs are quite lacking.