
625 points by lukebennett | 3 comments
irrational No.42139106
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligences. I'm probably wrong, but my impression was that AGI meant the AI is self-aware like a human. An LLM hardly seems like something that will lead to self-awareness.

jedberg No.42139186
Whether self-awareness is a requirement for AGI definitely gets more into the Philosophy department than the Computer Science department. I'm not sure everyone even agrees on what AGI is, but a common test is "can it do what humans can".

For example, this article says it can't do coding exercises outside the training set. That would definitely be on the "AGI checklist". Basically, doing anything outside the training set would be on that list.

norir No.42139703
Here is an example of a task that I do not believe this generation of LLMs can ever do but that is possible for a human: design a Turing-complete programming language that is both human- and machine-readable, and implement a self-hosted compiler in this language that self-compiles on existing hardware faster than any known self-compiling language implementation. Additionally, for any syntactically or semantically invalid program, the compiler must provide an error message that points exactly to the source location of the first error that occurs in the program.
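
To make just the last requirement concrete, here is a minimal sketch of what "point exactly to the first error" means; the toy grammar of digits, '+', and parentheses is purely illustrative, and this is a hint of the precision demanded, not the self-hosting compiler itself:

    # Minimal sketch (not the challenge itself): report the line and column
    # of the first error in a toy language of digits, '+', and parentheses.
    def first_error(source: str):
        line, col, depth = 1, 1, 0
        for ch in source:
            if ch == "\n":
                line, col = line + 1, 1
                continue
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth < 0:
                    return (line, col, "unmatched ')'")
            elif ch not in "0123456789+ \t":
                return (line, col, f"unexpected character {ch!r}")
            col += 1
        if depth > 0:
            return (line, col, "unclosed '('")
        return None

    if __name__ == "__main__":
        err = first_error("(1 + 2\n+ 3))")
        if err:
            line, col, msg = err
            print(f"error: {msg} at {line}:{col}")  # error: unmatched ')' at 2:5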

I will get excited for/scared of LLMs when they can tackle this kind of problem. But I don't believe they can, because of the fundamental nature of their design, which is backward-looking (and thus no better than the human state of the art) and lacks human intuition and self-awareness. Or perhaps, rather, I believe that the prompt required to get an LLM to produce such a program is a problem of at least equivalent complexity to implementing the program without an LLM.

1. Vampiero No.42145267
Here is an example of a task that I do not believe this generation of LLMs can ever do but that is possible for an average human: design a functional trivia app.

There, you don't need to invoke Turing or compiler bootstrapping. You just need one example of a use case where the accuracy of responses is mission-critical.

2. alainx277 No.42146128
o1-preview managed to complete this in one attempt:

https://chatgpt.com/share/67373737-04a8-800d-bc57-de74a415e2...

I think the parent comment's challenge is more appropriate.

3. Vampiero No.42148745
Have you personally verified that the answers are not hallucinations and that they are indeed factually true?

Oh, you just asked it to make a trivia app that feeds on JSON. Cute, but that's not what I meant. The web is full of tutorials for basic stuff like that.
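
The whole "app" part amounts to something like the sketch below (the questions.json name and its question/answer schema are made up for illustration); none of it touches the part that matters, which is whether the file's contents are true:

    # Minimal sketch of a JSON-fed quiz shell. The questions.json filename and
    # its [{"question": ..., "answer": ...}] schema are assumptions for
    # illustration; nothing here checks whether the answers are actually true.
    import json

    def run_quiz(path: str = "questions.json") -> None:
        with open(path, encoding="utf-8") as f:
            questions = json.load(f)
        score = 0
        for item in questions:
            guess = input(item["question"] + " ")
            if guess.strip().lower() == item["answer"].strip().lower():
                score += 1
        print(f"{score}/{len(questions)} correct")

    if __name__ == "__main__":
        run_quiz()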

To be clear, I meant that LLMs can't write trivia questions and answers, thus proving that they can't produce trustworthy outputs.

And a trivia app is a toy (one might even say... a trivial example), but it's a useful demonstration of why you wouldn't put an LLM into a system on which lives depend, let alone invest billions in it.

If you don't trust my words, just go back to fiddling with your models and ask them to write a trivia quiz about a topic that you know very well. Like a TV show.
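
Checking what comes back doesn't take much. A rough sketch like this is enough to count how many generated answers disagree with an answer key you trust; the file names and the exact-match comparison are just assumptions, and a real check still needs a human who knows the topic:

    # Rough sketch: compare model-written answers against a hand-checked key
    # for a topic you know well. File names and the exact-match comparison
    # are illustrative assumptions; real verification still needs a human.
    import json

    def spot_check(generated_path: str, trusted_path: str) -> None:
        with open(generated_path, encoding="utf-8") as f:
            generated = json.load(f)  # [{"question": ..., "answer": ...}, ...]
        with open(trusted_path, encoding="utf-8") as f:
            trusted = {q["question"]: q["answer"] for q in json.load(f)}
        wrong = [q for q in generated
                 if q["question"] in trusted
                 and q["answer"].strip().lower() != trusted[q["question"]].strip().lower()]
        print(f"{len(wrong)} of {len(generated)} answers disagree with the trusted key")
        for q in wrong[:5]:
            print(f"  Q: {q['question']}")
            print(f"  model: {q['answer']}  /  key: {trusted[q['question']]}")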