
625 points | lukebennett | 2 comments
irrational | No.42139106
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligences. I'm probably wrong, but my impression was AGI meant the AI is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

jedberg | No.42139186
Whether self-awareness is a requirement for AGI definitely gets more into the Philosophy department than the Computer Science department. I'm not sure everyone even agrees on what AGI is, but a common test is "can it do what humans can".

For example, the article says it can't do coding exercises outside the training set. That would definitely be on the "AGI checklist". Basically, doing anything outside of the training set would be on that list.

littlestymaar | No.42139314
> Whether self-awareness is a requirement for AGI definitely gets more into the Philosophy department than the Computer Science department.

Depends on how you define "self-awareness", but knowing that it doesn't know something, instead of hallucinating a plausible-but-wrong answer, is already self-awareness of some kind. And it's both highly valuable and beyond current tech's capability.

lagrange77 | No.42141969
Good point!

I'm wondering whether it would count if one extended it with an external program that gives it feedback during inference (via another prompt) about the correctness of its output.
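
Roughly what I have in mind, as an illustrative sketch only (call_llm is a stand-in for whatever model API you'd actually use, and the task/test wiring is made up for the example):

    import subprocess
    import sys
    import tempfile

    def call_llm(prompt: str) -> str:
        """Placeholder: plug in whatever model API you actually use."""
        raise NotImplementedError

    def external_check(candidate: str, test_code: str) -> tuple[bool, str]:
        """The 'external program': run the candidate plus a test, return (ok, feedback)."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate + "\n\n" + test_code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=30)
        return result.returncode == 0, result.stderr

    def solve_with_feedback(task: str, test_code: str, max_rounds: int = 3) -> str:
        prompt = f"Write a Python function for this task:\n{task}"
        answer = call_llm(prompt)
        for _ in range(max_rounds):
            ok, feedback = external_check(answer, test_code)
            if ok:
                break
            # The feedback re-enters the model as another prompt during inference.
            prompt = (f"Your previous attempt failed with:\n{feedback}\n"
                      f"Task: {task}\nPlease fix it.")
            answer = call_llm(prompt)
        return answer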

I guess it wouldn't, because these RAG tools kind of do that, and I haven't heard anyone call those self-aware.

littlestymaar | No.42145102
> if one extended it with an external program that gives it feedback

If you have an external program, then by definition it's not self-awareness ;). Also, it's not about correctness per se, but about the model's ability to assess its own knowledge (making a mistake because the model was exposed to mistakes in the training data is fine; hallucinating isn't).

lagrange77 | No.42150305
Yes, but that's essentially my point: where do you draw the system boundary? The brain is also composed of multiple components and does I/O with external components that are definitely not considered part of it.