
336 points | mooreds | 4 comments
raspasov No.44485275
Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design but not good at logic. They are very poor at spatial reasoning and, as a result, poor at connecting concepts together.

Just ask any of the crown jewel LLM models "What's the biggest unsolved problem in the [insert any] field".

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

replies(12): >>44485480 #>>44485483 #>>44485524 #>>44485758 #>>44485846 #>>44485900 #>>44485998 #>>44486105 #>>44486138 #>>44486182 #>>44486682 #>>44493526 #
1. timmg No.44485480
Interesting. I think the key to what you wrote is "poorly defined".

I find LLMs to be generally intelligent. So I feel like "we are already there" -- by some definition of AGI. At least as I think of it.

Maybe a lot of people think of AGI as "superhuman". And by that definition, we are not there -- and may not get there.

But, for me, we are already at the era of AGI.

replies(3): >>44485559 #>>44485562 #>>44489492 #
2. Incipient No.44485559
I would call them "generally applicable". "Intelligence" definitely implies learning - and, to split hairs, I'm not sure RAG, fine-tuning, or six-monthly updates count.

Where I will say we have a massive gap, which makes the average person not consider it AGI, is in context. I can give a person my very modest codebase and ask for a change, and they'll deliver - mostly coherently - in that style, files in the right place, etc. Still, today with AI, I get inconsistent design, files in random spots, etc.

3. apsurd No.44485562
That's the thing about language: we all kinda gotta agree on the meanings.
4. weatherlite No.44489492
> I find LLMs to be generally intelligent. So I feel like "we are already there" -- by some definition of AGI. At least how I think of it.

I don't disagree - they are useful in many cases and exhibit human-like (or better) performance on many tasks. However, they cannot simply be a "drop-in white-collar worker" yet; they are too jagged and unreliable, don't have a real memory, etc. Their economic impact is still very much limited. I think this is what many people mean by AGI - something with cognitive performance so good it equals or beats humans in the real world, at their jobs, not just at some benchmark.

One could ask - does it matter? Why can't we say the current tools are great task solvers and call it AGI even if they are bad agents? It's a lengthy discussion to have, but I think that ultimately yes, agentic reliability really matters.