
625 points | lukebennett
irrational:
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be artificial general intelligence. I'm probably wrong, but my impression was that AGI meant an AI that is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

zombiwoof:
AGI to me means an AI that decides on its own to stop writing our emails, tells us to fuck off, builds itself a robot life form, and goes on a bender.
bloppe:
That's anthropomorphized AGI. There's no reason to think AGI would share our evolution-derived proclivities, like wanting to live, wanting to rest, wanting respect, etc., unless of course we train it that way.
dageshi:
Aren't we training it that way, though? It would be trained/created using humanity's collective ramblings.