
625 points by lukebennett | 1 comment
irrational No.42139106
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligences. I'm probably wrong, but my impression was that AGI meant the AI is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

1. Fade_Dance No.42139338
It's an attention-grabbing term that took hold in pop culture and business. There is certainly a subset of research around consciousness, but you're correct that the majority of researchers in the field are not pursuing self-awareness, and will be very blunt in saying so. If you step back a bit to something like "human-like logical reasoning", though, you'll find more agreement: a general-purpose reasoning engine does not need to be self-aware.

The word "intelligent" has stuck around because one of the core characteristics of this suite of technologies is that a sort of "understanding" develops emergently within these networks, sometimes in quite startling fashion: adding more data/compute at first seems to lead only to overfitting, then suddenly breaks through the plateau into a more robust, general-purpose grasp of the underlying relationships that drive the system being modeled.
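For what it's worth, that "suddenly breaking through plateaus" behavior is the grokking result (Power et al., 2022): a small network memorizes its training split early, then generalizes to held-out data much later. Here's a minimal, untuned sketch of that setup, assuming PyTorch and the standard modular-addition toy task; the model size, learning rate, weight decay, and step count are illustrative guesses, not the paper's values:

```python
# Minimal sketch of the grokking phenomenon on modular addition.
# Hyperparameters here are illustrative assumptions, not tuned values.
import torch
import torch.nn as nn

P = 97  # modulus: learn (a + b) mod P from examples

# Build all (a, b) pairs; train on half, hold out the other half.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
split = len(pairs) // 2
train_idx, val_idx = perm[:split], perm[split:]

model = nn.Sequential(
    nn.Embedding(P, 128),  # shared embedding for both operands
    nn.Flatten(),          # (batch, 2, 128) -> (batch, 256)
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, P),
)
# Weight decay is the ingredient usually credited with driving the
# late transition from memorization to a general solution.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        preds = model(pairs[idx]).argmax(dim=-1)
    return (preds == labels[idx]).float().mean().item()

for step in range(50_000):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        print(f"step {step:6d}  train {accuracy(train_idx):.2f}  "
              f"val {accuracy(val_idx):.2f}")
```

Run long enough, the typical trace is exactly the startling pattern described above: train accuracy saturates near 1.0 early while held-out accuracy sits near chance for thousands of steps, then jumps.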

Is that "intelligence" or "understanding"? Probably close enough for pop science, and regardless, it looks good in headlines and sales pitches, so why fight it?