
334 points | mooreds | 1 comment
Nition No.44484466
Yeah, my suspicion is that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a roughly human level of ability to think and reason. Breadth of knowledge well beyond any human, but intelligence not far above, and creativity maybe below.
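
Concretely, the "predict what a human would say" part is just next-token cross-entropy on human-written text. A rough PyTorch sketch, with toy shapes and a stand-in model, just to make the objective explicit:

    # Sketch of the pretraining objective (toy model, illustrative only).
    import torch
    import torch.nn.functional as F

    vocab_size, seq_len, d_model = 1000, 16, 32

    # Stand-in for a transformer: anything that maps token ids to next-token logits.
    model = torch.nn.Sequential(
        torch.nn.Embedding(vocab_size, d_model),
        torch.nn.Linear(d_model, vocab_size),
    )

    tokens = torch.randint(0, vocab_size, (1, seq_len))  # a snippet of human text, tokenized
    logits = model(tokens)                                # (1, seq_len, vocab_size)

    # Every position is trained to predict the token the human actually wrote next.
    loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, vocab_size),          # predictions at positions 0..n-2
        tokens[:, 1:].reshape(-1),                       # the human's actual next tokens
    )
    loss.backward()

The whole training signal is "match the human", which is why I'd expect the ceiling to sit near the humans who produced the data.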

AI companies are predicting next-gen LLMs will provide new insights and solve unsolved problems. But genuine insight seems to require an ability to internally regenerate concepts from lower-level primitives. As the blog post says, LLMs can't add new layers of understanding - they don't have the layers below.

An AI that took in raw data and learned to understand the world from its inputs the way a human brain does might be able to keep advancing beyond human capacity for thought. I'm not sure that a contemporary LLM, working directly on existing human knowledge the way it does, will ever be able to do that. Maybe I'll be proven wrong soon, or a whole new AI paradigm will emerge that eclipses LLMs. In a way I hope not, because the potential ASI future is pretty scary.

1. energy123 No.44486810
> current-style LLMs, being inherently predictors of what a human would say

That's no longer what LLMs are. LLMs are now predictors of the tokens that are correlated with the correct answer to math and programming puzzles.
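
Roughly: instead of imitating human text, the model samples an answer, a verifier checks it (run the unit tests, compare the final number), and the sampled tokens get reinforced when the check passes. A toy REINFORCE-style sketch in PyTorch, with a stand-in verifier, just to show the shape of the signal:

    # Sketch of RL on verifiable rewards (toy policy and verifier, illustrative only).
    import torch

    vocab_size, d_model, answer_len = 1000, 32, 8
    policy = torch.nn.Sequential(
        torch.nn.Embedding(vocab_size, d_model),
        torch.nn.Linear(d_model, vocab_size),
    )

    def verifier(answer_tokens) -> float:
        # Stand-in for "run the tests" / "check the final number":
        # 1.0 if the sampled answer is correct, else 0.0.
        return float(answer_tokens.sum().item() % 2 == 0)  # toy check

    prompt = torch.randint(0, vocab_size, (1, 4))         # the puzzle, tokenized
    seq, log_probs = prompt, []
    for _ in range(answer_len):                           # sample an answer autoregressively
        logits = policy(seq)[:, -1]
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        seq = torch.cat([seq, tok.unsqueeze(0)], dim=1)

    reward = verifier(seq[:, prompt.shape[1]:])           # graded by correctness, not by a human
    loss = -reward * torch.stack(log_probs).sum()         # REINFORCE: reinforce tokens that scored
    loss.backward()

No human answer appears anywhere in that loss, so there's no obvious reason the ceiling is "what a human would say".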