
625 points | lukebennett | 1 comment
irrational No.42139106
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be artificial general intelligence. I'm probably wrong, but my impression was that AGI meant the AI is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

nshkrdotcom No.42139243
An embodied robot can maintain a model of itself as distinct from the immediate environment it is interacting with. Such a robot is arguably sentient.

The "hard problem" of consciousness, to which you may be alluding, may never matter in practice. It's already feasible for an AI/AGI system with an LLM component to be "self-aware" in this functional sense.
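The self-vs.-environment distinction the comment draws can be sketched as a toy data structure: an agent that tracks its own state separately from its map of the world, and can answer a minimal "is this me?" query. All names here are illustrative, not drawn from any real robotics stack:

```python
from dataclasses import dataclass, field

@dataclass
class SelfModel:
    """The robot's internal estimate of its own state."""
    x: float = 0.0
    y: float = 0.0
    battery: float = 1.0

@dataclass
class Robot:
    """Toy embodied agent keeping a model of itself separate
    from its model of the immediate environment."""
    self_model: SelfModel = field(default_factory=SelfModel)
    env_model: dict = field(default_factory=dict)

    def sense(self, obstacles):
        # Update the environment model from (hypothetical) sensor input.
        self.env_model["obstacles"] = set(obstacles)

    def move(self, dx, dy):
        # Acting updates the self-model, not the environment model:
        # the robot tracks the consequences of its own actions.
        self.self_model.x += dx
        self.self_model.y += dy
        self.self_model.battery -= 0.01

    def is_self(self, x, y):
        # A minimal self-vs.-other query: is this location me?
        return (x, y) == (self.self_model.x, self.self_model.y)

robot = Robot()
robot.sense([(3, 4)])
robot.move(1, 0)
print(robot.is_self(1, 0))                      # → True
print((3, 4) in robot.env_model["obstacles"])   # → True
```

Whether this kind of functional self-model amounts to sentience is exactly the point under dispute in the thread; the sketch only shows that the bookkeeping itself is trivial to implement.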

j_maffe No.42139268
Self-awareness is only one aspect of sentience.