
336 points by mooreds | 1 comment
Nition ◴[] No.44484466
Yeah, my suspicion is that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a relatively human level of ability to think and reason. Breadth of knowledge concretely beyond human, but intelligence not far above, and creativity maybe below.

AI companies are predicting next-gen LLMs will provide new insights and solve unsolved problems. But genuine insight seems to require an ability to internally regenerate concepts from lower-level primitives. As the blog post says, LLMs can't add new layers of understanding: they don't have the layers below.

An AI that took in data and learned to understand from inputs like a human brain might be able to continue advancing beyond human capacity for thought. I'm not sure that a contemporary LLM, working directly on existing knowledge as it does, will ever be able to do that. Maybe I'll be proven wrong soon, or a whole new AI paradigm will emerge that eclipses LLMs. In a way I hope not, because the potential ASI future is pretty scary.

replies(2): >>44486810 >>44487198
azakai ◴[] No.44487198
> Yeah, my suspicion is that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a relatively human level of ability to think and reason.

I don't think things can end there. Machines can be scaled in ways human intelligence can't: if you have a machine of vaguely human-level intelligence and you buy a 10x faster GPU, you suddenly have something of vaguely human-level intelligence running 10x faster.

Speed by itself is going to give it superhuman capabilities, but it isn't just speed. If you can run your system ten times rather than once, you can have each run consider a different approach to the task, then select the best, at least for verifiable tasks.
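
Here's a minimal sketch of that best-of-N idea in Python, with a toy verifiable task (finding a nontrivial divisor) standing in for a real problem. propose() is a hypothetical placeholder for one independent run of the system; a real setup would sample an LLM with a different seed or temperature per run:

    import random

    def verify(n: int, d: int) -> bool:
        # The key property of a verifiable task: checking a candidate
        # answer is cheap even when producing one is hard.
        return 1 < d < n and n % d == 0

    def propose(n: int) -> int:
        # Hypothetical stand-in for one independent run; each run
        # "considers a different approach" by sampling at random.
        return random.randrange(2, n)

    def best_of_n(n: int, runs: int = 10) -> int | None:
        # Run the system N times, keep only candidates the verifier
        # accepts, and select one of them.
        candidates = [propose(n) for _ in range(runs)]
        verified = [d for d in candidates if verify(n, d)]
        return min(verified) if verified else None

    print(best_of_n(30))  # usually a small divisor; None if no run verified

Note that the cheap verify() step is doing all the selection work here. For tasks without a verifier you'd need something like a judge model to pick the best run, which is a much weaker guarantee.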

replies(1): >>44489549
machiaweliczny ◴[] No.44489549
Good point