
625 points lukebennett | 1 comment
irrational No.42139106
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligence. I'm probably wrong, but my impression was that AGI meant the AI is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

nshkrdotcom No.42139243
An embodied robot can maintain a model of itself as distinct from the immediate environment it is interacting with. Such a robot is arguably sentient.
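For concreteness, here is a minimal sketch of what such a self-vs-environment model could look like; every name, field, and threshold is hypothetical, not taken from any real robotics stack:

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class SelfModel:
        # The robot's model of its own body and state.
        pose: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # x, y, heading
        battery: float = 1.0                                 # fraction remaining
        joint_angles: List[float] = field(default_factory=list)

    @dataclass
    class EnvironmentModel:
        # The robot's model of everything that is not itself.
        obstacles: List[Tuple[float, float]] = field(default_factory=list)
        goal: Optional[Tuple[float, float]] = None

    @dataclass
    class WorldModel:
        me: SelfModel = field(default_factory=SelfModel)
        env: EnvironmentModel = field(default_factory=EnvironmentModel)

        def is_self(self, point: Tuple[float, float]) -> bool:
            # Crude self/other test: is an observed point inside the
            # robot's own footprint? Radius is arbitrary.
            x, y, _ = self.me.pose
            px, py = point
            return (px - x) ** 2 + (py - y) ** 2 < 0.25

The point is only that "self vs. environment" can be an explicit, inspectable part of the robot's state.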

The "hard problem", to which you may be alluding, may never matter. It's already feasible for an 'AI/AGI with LLM component' to be "self-aware".

ryanackley No.42139500
An internal model of self does not amount to sentience. By your definition, a Windows desktop computer is self-aware because it has a Device Manager, which is literally an internal model of its "self".

We use the term self-awareness as an all-encompassing reference to our cognizant nature. It's much more than just having an internal model of self.