
170 points by PaulHoule | 1 comment
measurablefunc:
There is a formal extensional equivalence between Markov chains & LLMs, but the only person who seems to be saying anything about this is Gary Marcus. He constantly makes the point that symbolic understanding cannot be reduced to a probabilistic computation: regardless of how large the graph gets, it will still be missing basic machinery like backtracking (which is built into programming languages like Prolog). I think Gary is right on basically all counts. Probabilistic generative models are fun, but no amount of probabilistic sequence generation can be a substitute for logical reasoning.
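
For concreteness, backtracking here means roughly this kind of search: commit to a choice, and when it leads to a dead end, undo it and try the next alternative. A minimal Python sketch under that reading (the toy constraint, the variable names, and the consistent() helper are purely illustrative, not anything Prolog-specific):

    # A tiny backtracking search: bind variables one at a time and undo
    # (backtrack) a binding as soon as it cannot lead to a solution.
    # Depth-first search with chronological backtracking is the control
    # strategy Prolog's resolution engine provides for free.

    def backtrack(assignment, variables, domain, consistent):
        if len(assignment) == len(variables):
            return dict(assignment)          # every variable bound: success
        var = variables[len(assignment)]
        for value in domain:
            assignment[var] = value          # choice point
            if consistent(assignment):
                result = backtrack(assignment, variables, domain, consistent)
                if result is not None:
                    return result
            del assignment[var]              # dead end: undo and try the next value
        return None                          # no value works: fail upward

    # Illustrative constraint: x < y < z and x + y + z == 9 over the domain 1..5.
    def consistent(a):
        bound = [a[v] for v in ("x", "y", "z") if v in a]
        if any(p >= q for p, q in zip(bound, bound[1:])):
            return False                     # ordering already violated
        if len(bound) == 3 and sum(bound) != 9:
            return False
        return True

    print(backtrack({}, ("x", "y", "z"), range(1, 6), consistent))
    # -> {'x': 1, 'y': 3, 'z': 5}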
vidarh:
> Probabilistic generative models are fun but no amount of probabilistic sequence generation can be a substitute for logical reasoning.

Unless you claim either that humans can't do logical reasoning, or that humans exceed the Turing computable, this reasoning is illogical: you can trivially wire an LLM into a Turing-complete system (sketched below), and Turing equivalence does the rest.

And both of those claims lack evidence.
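
To make "wire an LLM into a Turing-complete system" concrete, here is a minimal sketch: a stand-in choose_action function plays the role of the model as the machine's finite control, and an external, growable tape supplies the unbounded memory a bare model lacks. The choose_action rule below is hypothetical; a real setup would prompt a model and parse its reply.

    # Sketch: an LLM stand-in as the finite control of a Turing-machine-like loop.

    def choose_action(state, symbol):
        # Hypothetical stand-in for an LLM call: given the current state and the
        # symbol under the head, return (new_state, symbol_to_write, head_move).
        # This toy rule just flips bits until it reads a blank, then halts.
        if symbol == " ":
            return ("halt", symbol, 0)
        return (state, "1" if symbol == "0" else "0", +1)

    def run(tape, state="start", head=0, max_steps=1000):
        tape = list(tape)
        for _ in range(max_steps):
            if state == "halt":
                break
            if head == len(tape):
                tape.append(" ")             # grow the tape on demand
            state, write, move = choose_action(state, tape[head])
            tape[head] = write
            head = max(0, head + move)
        return "".join(tape)

    print(run("0110"))  # "1001 " -- every bit flipped, then halt on the blank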

voidhorse:
Such a system redefines logical reasoning to the point that it would hardly match any typical person's definition.

It's Searle's Chinese Room scenario all over again, which everyone seems to have forgotten amidst the bs marketing storm around LLMs. A person with no knowledge of Chinese who translates texts by following a set of instructions and reading from a dictionary is a substitute for hiring a translator who understands Chinese; however, we would not claim that this person understands Chinese.

An LLM hooked up to a Turing machine would be similar with respect to logical reasoning. When we claim someone reasons logically we usually don't imagine they randomly throw ideas at the wall and then consult outputs to determine if they reasoned logically. Instead, the process of deduction makes the line of reasoning decidedly not stochastic. I can't believe we've gotten to such a mad place that basic notions like logical deduction are being confused with stochastic processes.

Ultimately, I would agree that it all comes back to the problem of other minds: either you take a fully reductionist stance and claim the brain and intellection are nothing more than probabilistic neural firing, or you take a non-reductionist stance and assume there may be more to it. In either case, I think claiming that LLMs+tools are equivalent to whatever process humans perform is kind of silly and severely underrates what humans are capable of.^1

1: Then again, this has been going on since the dawn of computing, which has always rested its brain=computer metaphors more on reducing what we mean by "thought" than on any substantively justified connection.

SpicyLemonZest:
> When we claim someone reasons logically we usually don't imagine they randomly throw ideas at the wall and then consult outputs to determine if they reasoned logically.

I definitely imagine that, and I'm surprised to hear you don't. To me it seems obvious that this is how humans reason logically. When you're developing a complex argument, don't you write a sloppy first draft and then review it to check and clean up the logic?

voidhorse:
I think you're mistaking my claim for something else. When I say logical reasoning here, I mean the dead simple reasoning that tells you that 1 + 1 - 1 = 1 or that, by definition, x <= y and y <= x imply x = y. You can reach these conclusions because you understand arithmetic or aspects of order theory and can use the basic definitions of those theories to deduce further facts. You don't need to throw random guesses at the wall to reach these conclusions, or operationally execute an algorithm every time, because you use your understanding and logical reasoning to reach an immediate conclusion, but LLMs precisely don't do this. Maybe you memorize these facts instead of using logic, or maybe you consult Google each time, but then I wouldn't claim that you understand arithmetic or order theory either.
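
For what it's worth, both of those facts are exactly the kind of thing a symbolic system settles by deduction from definitions rather than by sampling. A minimal Lean 4 sketch (assuming only the core-library lemma Nat.le_antisymm; the statements are just the two examples above):

    -- 1 + 1 - 1 = 1 holds by computation on the natural numbers
    example : 1 + 1 - 1 = 1 := rfl

    -- antisymmetry: x <= y and y <= x imply x = y
    example (x y : Nat) (h1 : x ≤ y) (h2 : y ≤ x) : x = y :=
      Nat.le_antisymm h1 h2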
vidarh:
LLMs don't "throw random guesses at the wall" in this respect any more than humans do.