
625 points by lukebennett | 5 comments
irrational ◴[] No.42139106[source]
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligences. I'm probably wrong, but my impression was that AGI meant the AI is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

replies(18): >>42139138 #>>42139186 #>>42139243 #>>42139257 #>>42139286 #>>42139294 #>>42139338 #>>42139534 #>>42139569 #>>42139633 #>>42139782 #>>42139855 #>>42139950 #>>42139969 #>>42140128 #>>42140234 #>>42142661 #>>42157364 #
zombiwoof ◴[] No.42139569[source]
AGI to me means the AI decides on its own to stop writing our emails, tells us to fuck off, builds itself a robot life form, and goes on a bender.
replies(3): >>42139821 #>>42139838 #>>42140044 #
1. bloppe ◴[] No.42139821[source]
That's anthropomorphized AGI. There's no reason to think AGI would share our evolution-derived proclivities like wanting to live, wanting to rest, wanting respect, etc. Unless of course we train it that way.
replies(4): >>42139982 #>>42140000 #>>42140149 #>>42140867 #
2. logicchains ◴[] No.42139982[source]
If it had any goals at all, it'd share the desire to live, because living is a prerequisite for achieving almost any goal.
3. dageshi ◴[] No.42140000[source]
Aren't we training it that way, though? It would be trained/created using humanity's collective ramblings, wouldn't it?
4. HarHarVeryFunny ◴[] No.42140149[source]
It's not a matter of training but of design (or, in our case, evolution). We don't want to live per se; rather, we want to avoid things we've evolved to find unpleasant, such as pain, hunger, and thirst, and to maximize things we've evolved to find pleasurable, like sex.

A future of people interacting with humanoid robots seems like a cheesy sci-fi dream, much like a future of people flitting about in flying cars. However, if we really did want to create robots like this, ones that took care not to damage themselves and could empathize with human emotions, then we'd need to build a lot of this in, the same way it's built into ourselves.
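To make "building it in" concrete, here's a toy sketch (purely illustrative, every name made up): hard-code the drive into the objective the robot optimizes, rather than hoping it falls out of the training data, the way evolution hard-coded pain and hunger into us.

    # Toy illustration only; all names here are hypothetical.
    def drive_based_reward(state: dict) -> float:
        reward = 0.0
        reward -= 10.0 * state["damage"]          # aversion to self-damage ("pain")
        reward -= 1.0 * state["energy_deficit"]   # analogue of hunger/thirst
        reward += 2.0 * state["task_progress"]    # whatever the robot is actually for
        return reward

    # An agent trained to maximize this acts as if it "wants" to avoid damage,
    # without any explicit desire to live being represented anywhere.
    print(drive_based_reward({"damage": 0.1, "energy_deficit": 0.3, "task_progress": 0.5}))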

5. ◴[] No.42140867[source]