174 points | Philpax | 5 comments
dcchambers ◴[] No.43720006[source]
And in 30 years it will be another 30 years away.

LLMs are so incredibly useful and powerful, but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All these AI companies see is the $$$. When the biggest "AI Research Labs" like OpenAI shifted to productizing their LLM offerings, I think the writing was on the wall that they don't actually care about finding AGI.

replies(3): >>43720042 #>>43720073 #>>43721975 #
thomasahle ◴[] No.43720042[source]
People will keep improving LLMs, and by the time they are AGI (less than 30 years), you will say, "Well, these are no longer LLMs."
replies(6): >>43720091 #>>43720108 #>>43720115 #>>43720202 #>>43720341 #>>43721154 #
1. Spartan-S63 ◴[] No.43720108[source]
What was the point of this comment? It's confrontational and doesn't add anything to the conversation. If you disagree, you could have just said that, or not commented at all.
replies(2): >>43720180 #>>43722080 #
2. logicchains ◴[] No.43720180[source]
The people who go around saying "LLMs aren't intelligent" while refusing to define exactly what they mean by intelligence (and hence not making a meaningful/testable claim) add nothing to the conversation.
replies(2): >>43722013 #>>43723651 #
3. AnimalMuppet ◴[] No.43722013[source]
OK, but the people who go around saying "LLMs are intelligent" are in the same boat...
4. AnimalMuppet ◴[] No.43722080[source]
There's been a complaint for several decades that "AI can never succeed" - because when, say, expert systems are developed from AI research and become capable of doing useful things, the naysayers say "That's not AI, that's just expert systems".

This is somewhat defensible, because what the non-AI-researcher means by AI - which may be AGI - is something more than expert systems by themselves can deliver. It is possible that "real AI" will be a combination of multiple approaches, but so far every reductionist approach (the claim that expert systems, say, are all it takes to be an AI) has proven inadequate compared to expectations.

The GP may have been riffing off of this "that's not AI" issue that goes way back.

5. cmsj ◴[] No.43723651[source]
I'll happily say that LLMs aren't intelligent, and I'll give you a testable version of it.

An LLM cannot be placed in a simulated universe, with an internally consistent physics system of which it knows nothing, and go from its initial state to a world-spanning civilization that understands and exploits a significant amount of the physics available to it.

I know that is true because if you place an LLM in such a universe, it's just a gigantic matrix of numbers that doesn't do anything. It's no more or less intelligent than the number 3 I just wrote on a piece of paper.

You can go further than that and provide the LLM with the ability to request sensory input from its universe, and it's still not intelligent, because it won't do that; it will just be a gigantic matrix of numbers that doesn't do anything.

To make it do anything in that universe you would have to provide it with intrinsic motivations and a continuous run loop, but that's not really enough because it's still a static system.
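
To make that concrete, here's a rough sketch in Python of the scaffolding you'd have to write. Every name here is hypothetical - llm() stands in for any completion API, and world is an assumed simulator object with observe()/apply() methods. The point is that the loop, the goal, and the memory all live outside the model; the model itself is a stateless text-in, text-out function:

    # Hypothetical harness: none of these names come from a real API.
    def llm(prompt: str) -> str:
        """Stand-in for any LLM completion call: text in, text out.
        It holds no state between calls and takes no actions on its own."""
        raise NotImplementedError

    def format_prompt(goal: str, history: list, observation: str) -> str:
        past = "\n".join(f"saw: {o} -> did: {a}" for o, a in history)
        return f"Goal: {goal}\n{past}\nYou now see: {observation}\nAction:"

    def agent_loop(world, goal: str) -> None:
        history = []                       # the "memory" lives out here, not in the model
        while True:
            observation = world.observe()  # we query the world on the model's behalf
            action = llm(format_prompt(goal, history, observation))
            world.apply(action)            # we execute the action for it, too
            history.append((observation, action))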

To really bootstrap it into intelligence you'd need to have it start with a very basic set of motivations that it's allowed to modify, and show that it can take that starting condition and grow beyond them.
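
And even those modifiable motivations would be scaffolding we wrote, not something the model grew. A sketch of what that might look like, extending the loop above: the "allowed to modify" part is literally just a string convention that the harness, not the model, defines and enforces:

    def agent_loop_v2(world, initial_motivations: list[str]) -> None:
        motivations = list(initial_motivations)
        history = []
        while True:
            observation = world.observe()
            output = llm(format_prompt("; ".join(motivations), history, observation))
            # Invented convention: a specially formatted line lets the model
            # request a rewrite of its own goals. The parsing and the
            # permission both live in this harness code, not in the weights.
            if output.startswith("SET_MOTIVATIONS:"):
                motivations = output.removeprefix("SET_MOTIVATIONS:").split(";")
            else:
                world.apply(output)
            history.append((observation, output))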

You will almost immediately run into the problem that LLMs can't learn beyond their context window, because they're not intelligent. Every time they run a "thought", they have to be reminded of every piece of information they previously read or wrote, because nothing they take in at inference time changes their weights - those were fixed at training time.
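
That "reminding" is literal. In a standard chat setup the entire transcript is replayed on every call, and anything past the window is simply gone. Roughly what every chat wrapper does, reusing the hypothetical llm() from above (the token counting is a crude approximation; real tokenizers and window sizes vary by model):

    MAX_CONTEXT_TOKENS = 128_000  # assumed window size; model-dependent

    def token_count(text: str) -> int:
        return len(text) // 4     # crude stand-in for a real tokenizer

    def chat_turn(transcript: list[str], user_msg: str) -> str:
        transcript.append(f"User: {user_msg}")
        # Frozen weights mean the prompt is the only memory: every prior
        # message must be re-sent on every single call.
        while token_count("\n".join(transcript)) > MAX_CONTEXT_TOKENS:
            transcript.pop(0)     # overflow is not "learned", just forgotten
        reply = llm("\n".join(transcript))
        transcript.append(f"Assistant: {reply}")
        return reply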

I don't mean to downplay the incredible human achievement of reaching a point in computing where we can take the sum total of human knowledge and process it into a set of probabilities that can regurgitate the most likely response to a given input, but it's not intelligence. Going from flint tools to semiconductors, vaccines, and spaceships is intelligence. The current architectures of LLMs are fundamentally incapable of that sort of thing. They're a useful substitute for intelligence in a growing number of situations, but they don't fundamentally solve problems; they just produce whatever their matrix determines is the most probable response to a given input.