
174 points | Philpax | 1 comment
dcchambers No.43720006
And in 30 years it will be another 30 years away.

LLMs are incredibly useful and powerful, but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All these AI companies see is the $$$. When the biggest "AI research labs" like OpenAI shifted to productizing their LLM offerings, I think the writing was on the wall that they don't actually care about finding AGI.

thomasahle No.43720042
People will keep improving LLMs, and by the time they are AGI (in less than 30 years), you will say, "Well, these are no longer LLMs."
dcchambers No.43720202
Will LLMs approach something that appears to be AGI? Maybe. Probably. They're already "better" than humans in many use cases.

LLMs/GPTs are essentially "just" statistical models. At this point the argument becomes more about philosophy than science: what is "intelligence"?

If an LLM can do something truly novel with no human prompting, with no directive other than one it has created for itself, then I guess we can call that intelligence.

kadushka No.43720232
How many people do you know who are capable of doing something truly novel? Definitely not me; I'm just an average PhD doing average research.
dingnuts No.43720487
I'm a lowly high school diploma holder. I thought the point of getting a PhD was that you had done something novel (your thesis).

Is that wrong?

kadushka No.43721200
My PhD thesis, just like 99% of other PhD theses, does not contain any "truly novel" ideas.