
625 points by lukebennett
irrational No.42139106
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligence. I'm probably wrong, but my impression was that AGI meant the AI is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

jedberg No.42139186
Whether self-awareness is a requirement for AGI is more a question for the Philosophy department than the Computer Science department. I'm not sure everyone even agrees on what AGI is, but a common test is "can it do what humans can do?"

For example, this article says the model can't do coding exercises outside its training set. That would definitely be on the "AGI checklist". Basically, doing anything outside the training set would be on that list.

Filligree No.42139671
Let me modify that a little, because humans can't do things outside their training set either.

A crucial element of AGI would be the ability to self-train on self-generated data, online. So it's not really AGI if there is a hard distinction between training and inference (though it may still be very capable), and it's not really AGI if it can't work its way through novel problems on its own.

The ability to immediately solve a problem it's never seen before is too high a bar, I think.

And yes, my definition still excludes a lot of humans in a lot of fields. That's a bullet I'm willing to bite.
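The train/inference distinction above can be made concrete with a toy sketch (purely illustrative; the class and method names here are made up, not from any real framework). A "frozen" model only ever runs inference on fixed weights; the online learner below interleaves acting with an immediate update, so there is no hard boundary between training and inference:

```python
class OnlineLearner:
    """A trivial 'model': predicts the running mean of everything it has seen."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def infer(self, x):
        # Act on the current input using the current "weights".
        return self.mean

    def update(self, x):
        # Immediately fold the new datum into the weights, online,
        # instead of waiting for a separate offline training phase.
        self.n += 1
        self.mean += (x - self.mean) / self.n


learner = OnlineLearner()
for x in [1.0, 2.0, 3.0]:
    _ = learner.infer(x)  # act
    learner.update(x)     # then learn from the experience

print(learner.mean)  # running mean of 1, 2, 3 -> 2.0
```

The point of the sketch is the loop shape, not the model: each step both uses and revises the weights, which is the property Filligree is saying current LLMs lack between training runs.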

lxgr No.42140011
Are you arguing that writing, doing math, going to the moon etc. were all in the "original training set" of humans in some way?
layer8 No.42140169
Not in the original training set (GP is saying), but the necessary skills became part of the training set over time. In other words, humans are fine with the training set being a moving target, whereas ML models are to a significant extent “stuck” with their original training set.

(That’s not to say that humans don’t tend to lose some of their flexibility over their individual lifetimes as well.)

Jensson No.42143746
> (That’s not to say that humans don’t tend to lose some of their flexibility over their individual lifetimes as well.)

The lifetime is the context window; the model/training is the DNA. A human in the moment isn't generally intelligent, but a human over their lifetime is. The former is much easier to try to replicate, but it's a bad target, since humans aren't born like that.