
174 points Philpax | 1 comment
dcchambers No.43720006
And in 30 years it will be another 30 years away.

LLMs are so incredibly useful and powerful, but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All these AI companies see is the $$$. When the biggest "AI Research Labs" like OpenAI shifted to productizing their LLM offerings, I think the writing was on the wall that they don't actually care about finding AGI.

replies(3): >>43720042 >>43720073 >>43721975
thomasahle No.43720042
People will keep improving LLMs, and by the time they are AGI (less than 30 years), you will say, "Well, these are no longer LLMs."
replies(6): >>43720091 >>43720108 >>43720115 >>43720202 >>43720341 >>43721154
Spartan-S63 No.43720108
What was the point of this comment? It's confrontational and doesn't add anything to the conversation. If you disagree, you could have just said that, or not commented at all.
replies(2): >>43720180 >>43722080
AnimalMuppet No.43722080
There's been a complaint for several decades that "AI can never succeed": when, say, expert systems are developed from AI research and become capable of doing useful things, the naysayers respond, "That's not AI, that's just expert systems."

This is somewhat defensible, because what the non-AI-researcher means by AI (which may be AGI) is something more than expert systems by themselves can deliver. It is possible that "real AI" will be a combination of multiple approaches, but so far every reductionist approach (the claim that expert systems, say, are all it takes to be an AI) has proven inadequate to those expectations.

The GP may have been riffing on this "that's not AI" issue, which goes way back.