
170 points PaulHoule | 5 comments
measurablefunc No.45120049
There is a formal extensional equivalence between Markov chains and LLMs, but the only person who seems to be saying anything about this is Gary Marcus. He is constantly making the point that symbolic understanding cannot be reduced to a probabilistic computation: regardless of how large the graph gets, it will still be missing basic stuff like backtracking (which is available in programming languages like Prolog). I think that Gary is right on basically all counts. Probabilistic generative models are fun, but no amount of probabilistic sequence generation can be a substitute for logical reasoning.
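The backtracking point can be made concrete. Prolog gets this for free from its resolution engine; in Python you write the depth-first search by hand. A toy sketch (the N-queens problem is my illustrative choice, not from the thread) of the choose-check-undo loop that "probabilistic sequence generation" has no native analogue for:

```python
def solve_queens(n, cols=()):
    """Place n queens on an n x n board via depth-first search
    with backtracking. cols[i] is the column of the queen in row i.
    Returns the first complete placement found, or None."""
    row = len(cols)
    if row == n:
        return cols  # every row filled: success
    for col in range(n):
        # Check the candidate against every queen already placed:
        # no shared column, no shared diagonal.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            result = solve_queens(n, cols + (col,))
            if result is not None:
                return result  # commit to this branch
        # Otherwise fall through: abandon the choice, try the next column.
    return None  # all choices exhausted: backtrack to the caller

print(solve_queens(4))  # → (1, 3, 0, 2)
```

The key move is the `return None` at the bottom: a dead end propagates failure upward so an earlier choice can be revised, which is exactly what a left-to-right token sampler cannot do once a token is emitted.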
1. tim333 No.45121159
Humans can do symbolic understanding that seems to rest on a rather flaky probabilistic neural network in our brains, or at least mine does. I can do maths and the like, but there's quite a lot of trial and error and double-checking involved.

When I asked GPT-5, it said it thinks the gap is fixable:

>Marcus is right that LLMs alone are not the full story of reasoning. But the evidence so far suggests the gap can be bridged—either by scaling, better architectures, or hybrid neuro-symbolic approaches.

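The "hybrid neuro-symbolic" option GPT-5's answer mentions is often described as propose-and-verify: a probabilistic generator emits candidates and an exact symbolic checker accepts or rejects them. A toy sketch, assuming hypothetical stand-ins for both components (neither is a real model or API):

```python
import random

def sample_candidate(rng):
    # Stand-in for a probabilistic generator (the "LLM" role):
    # proposes an arithmetic claim that is sometimes wrong.
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    guess = a + b + rng.choice([-1, 0, 1])  # noisy proposal
    return a, b, guess

def verify(a, b, guess):
    # Stand-in for the symbolic checker: exact, no probabilities.
    return a + b == guess

def propose_and_verify(seed=0, tries=100):
    """Keep sampling until a proposal passes the exact check."""
    rng = random.Random(seed)
    for _ in range(tries):
        a, b, guess = sample_candidate(rng)
        if verify(a, b, guess):
            return f"{a} + {b} = {guess}"
    return None  # generator never produced a verifiable claim

print(propose_and_verify())
```

The division of labour mirrors tim333's description of his own maths: a noisy generator doing trial and error, with the double-checking delegated to something that cannot be wrong about the check.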
2. afiori No.45122981
I sorta agree with you, but replying to "LLMs can't reason" with "an LLM says they can" is wild
3. JohnKemeny No.45124419
I asked ChatGPT and it agrees with the statement that it is indeed wild
4. wolvesechoes No.45124687
And I thought that the gap is bridged by giving another few billion to Sam Altman
5. tim333 No.45125906
I don't have a strong opinion on whether LLMs can reason or not. I think they can a bit, but not very well. I think that also applies to many humans, though. I was struck that, to my eyes, GPT-5's take on the question seemed better thought out than Gary Marcus's, who is pretty biased toward the "LLMs are rubbish" school.