174 points by Philpax | 1 comment
dcchambers (No.43720006)
And in 30 years it will be another 30 years away.

LLMs are incredibly useful and powerful, but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All these AI companies see is the $$$. When the biggest "AI research labs" like OpenAI shifted to productizing their LLM offerings, I think the writing was on the wall that they don't actually care about finding AGI.

thomasahle (No.43720042)
People will keep improving LLMs, and by the time they are AGI (less than 30 years), you will say, "Well, these are no longer LLMs."
dcchambers (No.43720202)
Will LLMs approach something that appears to be AGI? Maybe. Probably. They're already "better" than humans in many use cases.

LLMs/GPTs are essentially "just" statistical models. At this point the argument becomes more about philosophy than science. What is "intelligence?"

If an LLM can do something truly novel with no human prompting, with no directive other than something it has created for itself - then I guess we can call that intelligence.
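The "just a statistical model" view can be made concrete with a toy sketch: at its core, a language model maps a context (the tokens so far) to a probability distribution over the next token, then repeatedly picks from that distribution. The table of probabilities and the greedy decoding below are illustrative assumptions, not any real model's API.

```python
# Toy next-token model: a context tuple maps to a probability
# distribution over possible next tokens (hypothetical values).
NEXT_TOKEN_PROBS = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.9, "ran": 0.1},
    ("the", "cat", "sat"): {"<end>": 1.0},
}

def generate(context, max_tokens=10):
    """Greedily extend the context one token at a time."""
    tokens = list(context)
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens))
        if dist is None:
            break
        token = max(dist, key=dist.get)  # pick the most likely next token
        if token == "<end>":
            break
        tokens.append(token)
    return " ".join(tokens)

print(generate(["the"]))  # → "the cat sat"
```

A real LLM replaces the lookup table with a neural network over a huge vocabulary, and usually samples from the distribution rather than always taking the maximum, but the loop is the same: predict, pick, repeat.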

yibg (No.43722108)
Isn't the human brain, as far as we know, also "just" a big statistical model? (Very loosely speaking.)