
343 points | mooreds | 1 comment
raspasov ◴[] No.44485275[source]
Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design, but not good at logic. They are very poor at spatial reasoning, and as a result poor at connecting concepts.

Just ask any of the crown jewel LLM models "What's the biggest unsolved problem in the [insert any] field".

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

replies(12): >>44485480 #>>44485483 #>>44485524 #>>44485758 #>>44485846 #>>44485900 #>>44485998 #>>44486105 #>>44486138 #>>44486182 #>>44486682 #>>44493526 #
0x20cowboy ◴[] No.44486682[source]
LLMs are a compressed version of their training dataset with a text-based interactive search function.
replies(5): >>44486893 #>>44487019 #>>44487057 #>>44488479 #>>44495075 #
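The "compressed dataset with a text-based search function" framing can be sketched as a toy analogy. Everything below is hypothetical illustration, not how an LLM actually works: a real model interpolates in weight space rather than fuzzy-matching stored strings.

```python
import difflib

# Toy "compressed corpus": a handful of stored facts keyed by topic.
# (Hypothetical data, purely for illustration.)
corpus = {
    "capital of france": "Paris is the capital of France.",
    "boiling point of water": "Water boils at 100 C at sea level.",
}

def query(prompt: str) -> str:
    # The "text-based interactive search function": return the entry
    # whose key is closest to the prompt, however loose the match.
    keys = difflib.get_close_matches(prompt.lower(), corpus.keys(),
                                     n=1, cutoff=0.0)
    return corpus[keys[0]]

print(query("What is the capital of France?"))
```

The failure mode the parent comments describe falls out of the analogy: a near-miss prompt still returns a confident-sounding answer, because the lookup always retrieves *something* close.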
sporkland ◴[] No.44495075[source]
Yeah, I've been thinking about them as stochastic content-addressable memory. You can wrap as many `next = userInput; while (true) { next = mem[next]; }` loops around them as you need, in different forms (single shot, agents, etc.) and get wildly cool results out, but it's gated by some of the limitations there.
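That feedback loop can be sketched directly. This is a minimal sketch under stated assumptions: `model()` is a hypothetical stand-in for an LLM call, here a stochastic lookup table, and the loop terminates on a sentinel instead of `while (true)`.

```python
import random

# Hypothetical stand-in for an LLM call: stochastic
# content-addressable memory keyed on the prompt.
memory = {
    "plan": ["draft an outline", "list the steps"],
    "draft an outline": ["done"],
    "list the steps": ["done"],
}

def model(prompt: str) -> str:
    # Look up the prompt and sample one of the stored continuations.
    return random.choice(memory.get(prompt, ["done"]))

# The while-loop wrapper: feed each output back in as the next input.
next_input = "plan"  # userInput
while next_input != "done":
    next_input = model(next_input)
print(next_input)
```

An "agent" in this framing is just a fancier wrapper around the same loop: richer state between iterations, tool calls mixed into the lookup, but the core `next = mem[next]` shape is unchanged.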