
170 points by PaulHoule | 1 comment
measurablefunc:
There is a formal extensional equivalence between Markov chains & LLMs, but the only person who seems to be saying anything about this is Gary Marcus. He is constantly making the point that symbolic understanding cannot be reduced to a probabilistic computation: regardless of how large the graph gets, it will still be missing basic capabilities like backtracking (which is available in programming languages like Prolog). I think that Gary is right on basically all counts. Probabilistic generative models are fun, but no amount of probabilistic sequence generation can substitute for logical reasoning.
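A minimal Python sketch of the two claims (the toy model, names, and goal predicate below are illustrative assumptions, not anyone's actual system): a generator whose next-token distribution depends only on the last k tokens is, extensionally, a Markov chain on k-token states, and such a forward-only sampler never revises a committed choice, whereas a backtracking search over the same model can undo a failed branch, Prolog-style.

    # Hedged sketch, not a real LLM: a fixed-context sampler is a Markov chain
    # (the next-token distribution depends only on the last K tokens), and a
    # forward-only sampler has no built-in backtracking. The toy model below
    # is an illustrative assumption.
    import random

    K = 2  # context window: the Markov state is the last K tokens

    # Toy next-token distributions keyed by the last K tokens.
    toy_model = {
        ("the", "cat"): {"sat": 0.7, "ran": 0.3},
        ("cat", "sat"): {"on": 1.0},
        ("cat", "ran"): {"away": 1.0},
        ("sat", "on"):  {"mat": 1.0},
        ("ran", "away"): {".": 1.0},
        ("on", "mat"):  {".": 1.0},
        ("away", "."):  {".": 1.0},
        ("mat", "."):   {".": 1.0},
    }

    def generate(model, prompt, steps):
        """Forward-only sampling: each choice is committed and never revised."""
        out = list(prompt)
        for _ in range(steps):
            state = tuple(out[-K:])
            dist = model.get(state, {".": 1.0})
            tokens, probs = zip(*dist.items())
            out.append(random.choices(tokens, weights=probs)[0])
        return out

    def backtrack(model, prefix, goal, depth):
        """Depth-first search over the same model, but with backtracking:
        a branch that cannot reach the goal is undone, Prolog-style."""
        if goal(prefix):
            return prefix
        if depth == 0:
            return None
        state = tuple(prefix[-K:])
        for token in model.get(state, {}):
            found = backtrack(model, prefix + [token], goal, depth - 1)
            if found is not None:
                return found  # keep this branch only if it succeeds
        return None           # fail: undo the choice and try a sibling

    if __name__ == "__main__":
        print(generate(toy_model, ["the", "cat"], 4))               # may wander into "ran away"
        wants_mat = lambda seq: "mat" in seq
        print(backtrack(toy_model, ["the", "cat"], wants_mat, 4))   # always reaches "mat"

Running it, generate may commit to the "ran" branch and never produce "mat", while backtrack always finds a sequence containing "mat" when one is reachable; the difference is the search discipline, not the probabilities.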
tim333:
Humans can do symbolic understanding that seems to rest on a rather flaky probabilistic neural network in our brains, or at least mine does. I can do maths and the like, but there's quite a lot of trial and error and double-checking involved.

GPT-5 said it thinks it's fixable when I asked it:

>Marcus is right that LLMs alone are not the full story of reasoning. But the evidence so far suggests the gap can be bridged—either by scaling, better architectures, or hybrid neuro-symbolic approaches.

wolvesechoes:
And I thought that the gap was bridged by giving another few billion to Sam Altman.