625 points by lukebennett | 10 comments
irrational ◴[] No.42139106[source]
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligences. I'm probably wrong, but my impression was AGI meant the AI is self aware like a human. An LLM hardly seems like something that will lead to self-awareness.

replies(18): >>42139138 #>>42139186 #>>42139243 #>>42139257 #>>42139286 #>>42139294 #>>42139338 #>>42139534 #>>42139569 #>>42139633 #>>42139782 #>>42139855 #>>42139950 #>>42139969 #>>42140128 #>>42140234 #>>42142661 #>>42157364 #
Taylor_OD ◴[] No.42139138[source]
I think your definition is off from how most people would define AGI. Generally, it means being able to think and reason at a human level across most or all tasks and jobs.

"Artificial General Intelligence (AGI) refers to a theoretical form of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level comparable to that of a human being."

Altman says AGI could be here in 2025: https://youtu.be/xXCBz_8hM9w?si=F-vQXJgQvJKZH3fv

But he certainly means an LLM that can perform at or above human level on most tasks, rather than a self-aware entity.

replies(3): >>42139407 #>>42139669 #>>42139677 #
1. nomel ◴[] No.42139677[source]
> than a self aware entity.

What does this mean? If I had a blind, deaf, paralyzed person who could only communicate through text, what would the signs be that they were self-aware?

Is this more of a feedback loop problem? If I let the LLM run in a loop, and tell it it's talking to itself, would that be approaching "self aware"?
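
A minimal sketch of such a self-talk loop, assuming the OpenAI Python client (the model name and prompts here are illustrative, not anything from the thread):

    # Sketch of an LLM "talking to itself" in a loop (assumes the openai>=1.0
    # Python client and an OPENAI_API_KEY in the environment; the model name
    # and prompts are illustrative).
    from openai import OpenAI

    client = OpenAI()

    messages = [
        {"role": "system", "content": "You are talking to yourself. "
                                      "Reflect on your previous statement each turn."},
        {"role": "user", "content": "Begin by describing what you are."},
    ]

    for turn in range(5):
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        text = reply.choices[0].message.content
        print(f"turn {turn}: {text}\n")
        # Feed the model's own output back to it, so each response is
        # conditioned on everything it has already "said" to itself.
        messages.append({"role": "assistant", "content": text})
        messages.append({"role": "user", "content": "Continue reflecting on what you just said."})

Whether that constitutes anything like self-awareness is exactly the question, but mechanically the loop is trivial to build.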

replies(1): >>42140260 #
2. layer8 ◴[] No.42140260[source]
Being aware of its own limitations, for example. Or being aware of how its utterances may come across to its interlocutor.

(And by limitations I don’t mean “sorry, I’m not allowed to help you with this dangerous/contentious topic”.)

replies(3): >>42140889 #>>42141298 #>>42141640 #
3. nuancebydefault ◴[] No.42140889[source]
There is no way of proving awareness in humans, let alone machines. We do not even know whether awareness exists or whether it is just a word people made up to describe some kind of feeling.
replies(1): >>42142760 #
4. revscat ◴[] No.42141298[source]
Plenty of humans, unfortunately, are incapable of admitting limitations. Many years ago I had a coworker who believed he would never die. At first I thought he was joking, but he was in fact quite serious.

Then there are those who are simply narcissistic, and cannot and will not admit fault regardless of the evidence presented to them.

replies(1): >>42142791 #
5. nomel ◴[] No.42141640[source]
> Or being aware of how its utterances may come across to its interlocutor.

I think this behavior is being somewhat demonstrated in newer models. I've seen GPT-3.5 175B correct itself mid-response with, almost literally:

> <answer with flaw here>

> Wait, that's not right, that <reason for flaw>.

> <correct answer here>.

Later models seem to have much more awareness of, or "weight" toward, their own responses while generating them.

replies(1): >>42142851 #
6. layer8 ◴[] No.42142760{3}[source]
Awareness is exhibited in behavior. It's exactly because of the behavior we observe from LLMs that we don't ascribe awareness to them. I agree that it's difficult to define, and it's also not binary, but it's behavior we'd like AI to have and which LLMs are quite lacking in.
replies(1): >>42178012 #
7. layer8 ◴[] No.42142791{3}[source]
Being aware and not admitting are two different things, though. When you confront an LLM with a limitation, it will generally admit having it. That doesn't mean it exhibits any awareness of the limitation in contexts where the limitation is glaringly relevant, without first having been confronted with it. This is in itself a limitation of LLMs: in contexts where it should be highly obvious, they don't take their limitations into account without specific prompting.
8. layer8 ◴[] No.42142851{3}[source]
I'm assuming the "Wait" sentence is from the user. What I mean is that when humans say something, they also tend to have a view (maybe via the famous mirror neurons) of how this now sounds to the other person. They may catch themselves while speaking, change course mid-sentence, add another sentence to soften or highlight something in the previous one, or correct or admit some aspect after the fact. LLMs don't exhibit such an inner feedback loop, in which they reconsider the effect of the output they are in the process of generating.

You won't get an LLM outputting "wait, that's not right" halfway through their original output (unless you prompted them in a way that would trigger such a speech pattern), because no re-evaluation is taking place without further input.
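
That re-evaluation can, of course, be bolted on from the outside by feeding the draft back as further input. A rough sketch, again assuming the OpenAI Python client with an illustrative model name:

    # Sketch of an external re-evaluation pass: the first answer is fed back to
    # the model as input, which is exactly the "further input" described above
    # (assumes the openai>=1.0 Python client; model name and prompts are illustrative).
    from openai import OpenAI

    client = OpenAI()

    def answer_with_review(question: str, model: str = "gpt-4o-mini") -> str:
        # First pass: produce a draft answer.
        draft = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content

        # Second pass: the draft is now part of the input, so the model can
        # "re-read" it and correct itself, which it cannot do mid-generation.
        review = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "user", "content": question},
                {"role": "assistant", "content": draft},
                {"role": "user", "content": "Re-read your answer above. If anything is wrong, correct it; otherwise restate it."},
            ],
        ).choices[0].message.content
        return review

    print(answer_with_review("What is 17 * 24?"))

But that loop lives in the calling code, not in the model.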

replies(1): >>42177920 #
9. nomel ◴[] No.42177920{4}[source]
> You won't get an LLM outputting "wait, that's not right" halfway through their original output

No, that's one contiguous response from the LLM. I have screenshots, because I was so surprised the first time. I've had it happen many times. This was (as I always use LLMs) via direct API calls. The first case it happened was with the largest Llama 3.5. It usually only happens one-shot, with no context and a base/empty system prompt.

> LLMs don't exhibit such an inner feedback loop

That's not true at all. Next-token prediction is conditioned on all previous text, including the token that was just produced. It uses what it has already said for what it will say next, within the same response, just as a Markov chain would.
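
A greedy decoding loop makes that explicit. A minimal sketch, assuming the Hugging Face transformers library and PyTorch, with gpt2 as an illustrative checkpoint:

    # Sketch of greedy autoregressive decoding: each step conditions on the full
    # prefix, including the token generated a moment earlier (assumes the Hugging
    # Face transformers library and PyTorch; "gpt2" is just an illustrative checkpoint).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):
            logits = model(input_ids).logits      # next-token scores at every position
            next_id = logits[0, -1].argmax()      # greedily pick the most likely next token
            # Append the freshly generated token to the context, so the next step
            # "sees" everything the model has already produced in this response.
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

    print(tokenizer.decode(input_ids[0]))

Whether conditioning on your own partial output counts as the kind of inner feedback loop the parent describes is a separate question, but the output-so-far is absolutely part of the input at every step.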
