
336 points | mooreds | 1 comment
raspasov
Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design but not good at logic. Very poor at spatial reasoning and as a result poor at connecting concepts together.

Just ask any of the crown jewel LLM models "What's the biggest unsolved problem in the [insert any] field".

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

timmg
Interesting. I think the key to what you wrote is "poorly defined".

I find LLMs to be generally intelligent. So I feel like "we are already there" -- by some definition of AGI. At least how I think of it.

Maybe a lot of people think of AGI as "superhuman". And by that definition, we are not there -- and may not get there.

But, for me, we are already at the era of AGI.

Incipient
I would call them "generally applicable". "Intelligence" definitely implies learning - and, to split hairs, I'm not sure RAG, fine-tuning, or six-monthly model updates count.

Where I will say we have a massive gap - one that makes the average person not consider it AGI - is in context. I can give a person my very modest codebase and ask for a change, and they'll deliver, mostly coherently, in that style, with files in the right place, etc. Even today with AI, I get inconsistent design, files in random spots, etc.