174 points Philpax | 4 comments
dcchambers ◴[] No.43720006[source]
And in 30 years it will be another 30 years away.

LLMs are so incredibly useful and powerful, but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All these AI companies see is the $$$. When the biggest "AI research labs" like OpenAI shifted to productizing their LLM offerings, I think the writing was on the wall: they don't actually care about finding AGI.

replies(3): >>43720042 #>>43720073 #>>43721975 #
csours ◴[] No.43720073[source]
People overestimate the short term and underestimate the long term.
replies(2): >>43721194 #>>43721364 #
AstroBen ◴[] No.43721364[source]
Compound growth starting from 0 is... always 0 (quick sketch below). Current LLMs have 0 general reasoning ability.

We haven't even taken the first step towards AGI
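
A quick illustrative Python sketch of that arithmetic (the compound() helper, the 50% rate, and the period count are all invented for the example):

    # Exponential growth from a starting base: base * (1 + rate)^periods.
    # A base of exactly 0 stays 0 at any rate; any nonzero base takes off.
    def compound(base: float, rate: float, periods: int) -> float:
        return base * (1 + rate) ** periods

    print(compound(0.0, 0.5, 100))     # 0.0 - zero compounds to zero forever
    print(compound(0.0001, 0.5, 100))  # ~4.1e13 - a tiny nonzero base explodes

This is also why the 0-vs-0.0001 question downthread matters: the two bases are nearly indistinguishable today but diverge completely under compounding.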

replies(3): >>43721635 #>>43722130 #>>43722568 #
1. csours ◴[] No.43722130{3}[source]
0 and 0.0001 may be difficult to distinguish.
replies(1): >>43722600 #
2. AstroBen ◴[] No.43722600[source]
You need to show evidence of that 0.0001 first; otherwise you're going off blind faith.
replies(1): >>43722621 #
3. csours ◴[] No.43722621[source]
I didn't make a claim either way.

LLMs may well plateau without ever getting to AGI - this is my current personal belief - but people are certainly motivated to work on AGI.

replies(1): >>43722663 #
4. AstroBen ◴[] No.43722663{3}[source]
Oh for sure. I'm just fighting against the AGI hype. If we survive another 10,000 years, I think we'll get there eventually, but it's anyone's guess as to when.