
174 points Philpax | 9 comments
dcchambers ◴[] No.43720006[source]
And in 30 years it will be another 30 years away.

LLMs are so incredibly useful and powerful but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All these AI companies see is the $$$. When the biggest "AI Research Labs" like OpenAI shifted to productizing their LLM offerings, I think the writing was on the wall that they don't actually care about finding AGI.

replies(3): >>43720042 #>>43720073 #>>43721975 #
1. csours ◴[] No.43720073[source]
People over-estimate the short term and under-estimate the long term.
replies(2): >>43721194 #>>43721364 #
2. barrell ◴[] No.43721194[source]
People overestimate outcomes and underestimate timeframes
3. AstroBen ◴[] No.43721364[source]
Compound growth starting from 0 is... always 0. Current LLMs have 0 general reasoning ability

We haven't even taken the first step towards AGI
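The compounding claim above is just arithmetic: any multiplicative growth rate applied to a starting value of exactly zero leaves it at zero, while even a tiny nonzero seed eventually explodes. A minimal sketch (the function name and parameters are illustrative, not from the thread):

```python
def compound(start: float, rate: float, steps: int) -> float:
    """Apply multiplicative growth `rate` to `start` for `steps` periods."""
    value = start
    for _ in range(steps):
        value *= (1 + rate)
    return value

# A tiny nonzero seed compounds into something large...
print(compound(0.0001, 0.5, 50))  # 0.0001 * 1.5**50, far greater than 1

# ...but zero times any growth factor is still zero.
print(compound(0.0, 0.5, 50))     # 0.0
```

This is the crux of the later exchange: whether current LLM reasoning ability is a true 0 or a 0.0001 changes everything about what compounding progress can deliver.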

replies(3): >>43721635 #>>43722130 #>>43722568 #
4. ◴[] No.43721635[source]
5. csours ◴[] No.43722130[source]
0 and 0.0001 may be difficult to distinguish.
replies(1): >>43722600 #
6. jay_kyburz ◴[] No.43722568[source]
WTF, my calculator in high school was already a step towards AGI.
7. AstroBen ◴[] No.43722600{3}[source]
You need to show evidence of that 0.0001 first, otherwise you're going on blind faith.
replies(1): >>43722621 #
8. csours ◴[] No.43722621{4}[source]
I didn't make a claim either way.

LLMs may well reach a closed endpoint without getting to AGI (this is my personal current belief), but people are certainly motivated to work on AGI.

replies(1): >>43722663 #
9. AstroBen ◴[] No.43722663{5}[source]
Oh for sure. I'm just fighting against the AGI hype. If we survive another 10,000 years I think we'll get there eventually but it's anyone's guess as to when