
336 points by mooreds | 1 comment
raspasov No.44485275
Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design, but not good at logic. They are very poor at spatial reasoning and, as a result, poor at connecting concepts together.

Just ask any of the crown-jewel LLM models: "What's the biggest unsolved problem in the [insert any] field?"

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.
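If you want to reproduce this experiment yourself against a hosted model, a minimal sketch follows; it assumes the OpenAI Python SDK (openai>=1.0) with an OPENAI_API_KEY set in the environment, and the model name and field are placeholders, not anything endorsed above:

  # Minimal sketch of the experiment described above: ask a hosted LLM for the
  # "biggest unsolved problem" in a field you know well, then check the answer
  # against your own domain knowledge.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  field = "computational fluid dynamics"  # placeholder: swap in any field you know well

  response = client.chat.completions.create(
      model="gpt-4o",  # placeholder model name
      messages=[
          {
              "role": "user",
              "content": f"What's the biggest unsolved problem in the {field} field?",
          }
      ],
  )

  # The interesting part is not the fluent prose but spotting the subtle,
  # critical mistakes in it by hand.
  print(response.choices[0].message.content)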

0x20cowboy No.44486682
LLMs are a compressed version of their training dataset with a text-based interactive search function.
bdelmas No.44488479
Exactly. I am so tired of hearing about AI… And they are not even AI! I am also losing faith in this field when I see how much hype and how many lies they all push like this instead of being transparent. They are not AGIs, not even AIs… For now they are only models, and your definition is a good one.