333 points by mooreds | 1 comment
raspasov ◴[] No.44485275[source]
Anyone who claims that AGI, a poorly defined concept, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design, but not good at logic. They are very poor at spatial reasoning and, as a result, poor at connecting concepts together.

Just ask any of the crown-jewel LLMs: "What's the biggest unsolved problem in [insert any field]?"

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.
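
For anyone who wants to try this probe themselves, here is a minimal sketch using the OpenAI Python client (the model name, the field, and the judging step are my own illustrative choices, not part of the original comment; any chat-capable model and any field you know deeply will do):

    # Sketch: ask a model the probe question, then judge the answer against
    # your own domain expertise. Model name and field are placeholders.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    field = "condensed matter physics"  # pick a field you actually know well

    resp = client.chat.completions.create(
        model="gpt-4o",  # example model name only
        messages=[{
            "role": "user",
            "content": f"What's the biggest unsolved problem in {field}?",
        }],
    )
    print(resp.choices[0].message.content)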

replies(12): >>44485480 #>>44485483 #>>44485524 #>>44485758 #>>44485846 #>>44485900 #>>44485998 #>>44486105 #>>44486138 #>>44486182 #>>44486682 #>>44493526 #
0x20cowboy ◴[] No.44486682[source]
LLMs are a compressed version of their training dataset with a text-based interactive search function.
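
Rough back-of-envelope arithmetic for the "compressed version of the training dataset" framing (every number below is an illustrative assumption, not a figure from this thread):

    # Back-of-envelope: size of the weights vs. size of the raw training text.
    params = 1e12            # assume a ~1-trillion-parameter model
    bytes_per_param = 2      # assume fp16/bf16 storage
    tokens = 15e12           # assume a ~15-trillion-token training corpus
    bytes_per_token = 4      # assume ~4 bytes of UTF-8 text per token

    weight_bytes = params * bytes_per_param   # ~2 TB of weights
    corpus_bytes = tokens * bytes_per_token   # ~60 TB of text

    print(f"corpus-to-weights ratio ~ {corpus_bytes / weight_bytes:.0f}:1")  # ~30:1
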
replies(4): >>44486893 #>>44487019 #>>44487057 #>>44488479 #
lexandstuff ◴[] No.44487019[source]
Yes, but you're missing their ability to interpolate across that dataset at retrieval time, which is what makes them extremely useful. Also, people are willing to invest a lot of money to keep building those datasets, until nearly everything of economic value is in there.
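
A toy analogy for the lookup-vs-interpolation distinction (my own illustration, not how transformers work internally): a pure lookup table can only return points it has stored, while an interpolator can also answer queries that fall between them, plausibly but not always exactly right.

    # Toy: "memorized" (x, x**2) pairs, queried by exact lookup vs. interpolation.
    table = {0.0: 0.0, 1.0: 1.0, 2.0: 4.0}

    def lookup(x):
        return table.get(x)  # search function: exact hits only

    def interpolate(x):
        xs = sorted(table)
        for lo, hi in zip(xs, xs[1:]):
            if lo <= x <= hi:
                t = (x - lo) / (hi - lo)
                return table[lo] + t * (table[hi] - table[lo])
        return None

    print(lookup(1.5))       # None: 1.5 was never in the "dataset"
    print(interpolate(1.5))  # 2.5: plausible in-between answer (true value is 2.25)
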
replies(2): >>44487043 #>>44488706 #
whiteboardr ◴[] No.44488706[source]
Because hypetrain.