336 points mooreds | 5 comments
raspasov ◴[] No.44485275[source]
Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design, but they are not good at logic. They are very poor at spatial reasoning and, as a result, poor at connecting concepts together.

Just ask any of the crown jewel LLM models "What's the biggest unsolved problem in the [insert any] field".

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

replies(12): >>44485480 #>>44485483 #>>44485524 #>>44485758 #>>44485846 #>>44485900 #>>44485998 #>>44486105 #>>44486138 #>>44486182 #>>44486682 #>>44493526 #
0x20cowboy ◴[] No.44486682[source]
LLM are a compressed version of their training dataset with a text based interactive search function.
replies(4): >>44486893 #>>44487019 #>>44487057 #>>44488479 #
Salgat ◴[] No.44486893[source]
LLMs require the sum total of human knowledge to ape what you can find on Google; meanwhile, Ramanujan achieved brilliant discoveries in mathematics using nothing but a grade school education and a few math books.
replies(1): >>44487060 #
rowanG077 ◴[] No.44487060[source]
You phrase it as a diss, but "Yeah, LLMs suck, they aren't even as smart as Ramanujan" sounds like high praise to me.
replies(1): >>44487389 #
1. Salgat ◴[] No.44487389[source]
Unfortunately, LLMs fail even basic logic tests given to children, so it's definitely not high praise. I'm just pointing out the absurd amount of data they need versus humans, to highlight that they're just spitting out regressions on their training data. We're talking about data that would take a human countless thousands of lifetimes to ingest. Yet a human can accomplish infinitely more with a basic grade school education.
replies(1): >>44487984 #
2. jbstack ◴[] No.44487984[source]
Humans can achieve more within one (or two, or a few) narrowly scoped field(s), after a lot of hard work and effort. LLMs can display a basic level of competency (with some mistakes) in almost any topic known to mankind. No one reasonably expects an LLM to be able to do the former, and humans certainly cannot do the latter.

You're comparing apples and oranges.

Also, your comparison is unfair. You've chosen an exceptionally high achiever as your example of a human to compare against LLMs. If you instead compare the average human, LLMs don't look so bad, even when the human has the advantage of specialisation (e.g. medical diagnostics). An LLM can do reasonably well against an average (not exceptional) person with just a basic grade school education if asked to produce an essay on some topic.

replies(2): >>44488162 #>>44493759 #
3. mhuffman ◴[] No.44488162[source]
>Humans can achieve more within one (or two, or a few) narrowly scoped field(s), after a lot of hard work and effort.

>No one reasonably expects a LLM to be able to do the former

I can feel Sam Altman's rage building ...

replies(1): >>44489340 #
4. weatherlite ◴[] No.44489340{3}[source]
Yeah, I think many investors do expect that ...
5. Salgat ◴[] No.44493759[source]
With Google I can demonstrate a wide breadth of knowledge too. LLMs aren't unique in that respect.