
170 points PaulHoule | 3 comments
measurablefunc ◴[] No.45120049[source]
There is a formal extensional equivalence between Markov chains & LLMs, but the only person who seems to be saying anything about this is Gary Marcus. He is constantly making the point that symbolic understanding cannot be reduced to a probabilistic computation: regardless of how large the graph gets, it will still be missing basic capabilities like backtracking (which is available in programming languages like Prolog). I think Gary is right on basically all counts. Probabilistic generative models are fun, but no amount of probabilistic sequence generation can be a substitute for logical reasoning.
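The Markov-chain framing can be made concrete: with a fixed context window, the next token depends only on the current window, which is exactly the Markov property. A toy bigram sketch (vocabulary and transition weights are made up for illustration):

```python
import random

# Toy bigram Markov chain: the next token depends only on the current one.
# A fixed-context LLM is the same kind of object with an astronomically
# larger state space (states = entire context windows).
CHAIN = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"on": 1.0},
    "on":  {"the": 1.0},
}

def sample_next(token, rng):
    # Draw the next token from the current token's outgoing distribution.
    choices, weights = zip(*CHAIN[token].items())
    return rng.choices(choices, weights=weights)[0]

def generate(start, n, seed=0):
    # Produce `start` plus n sampled continuations, reproducibly via the seed.
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(sample_next(out[-1], rng))
    return out
```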
replies(16): >>45120249 #>>45120259 #>>45120415 #>>45120573 #>>45120628 #>>45121159 #>>45121215 #>>45122702 #>>45122805 #>>45123808 #>>45123989 #>>45125478 #>>45125935 #>>45129038 #>>45130942 #>>45131644 #
vidarh ◴[] No.45121215[source]
> Probabilistic generative models are fun but no amount of probabilistic sequence generation can be a substitute for logical reasoning.

Unless you either claim that humans can't do logical reasoning, or claim that humans exceed the Turing computable, then, given that you can trivially wire an LLM into a Turing-complete system, this reasoning is illogical by Turing equivalence.

And either of those two claims lacks evidence.
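The "wire an LLM into a Turing-complete system" construction is usually an outer loop: the model acts as the finite control, and external code supplies the unbounded tape. A minimal sketch, with a hard-coded stub standing in for the hypothetical temperature-zero LLM call:

```python
from collections import defaultdict

def query_model(state, symbol):
    # Stand-in for a temperature-zero LLM call: in the real construction you
    # would prompt the model with the rule table plus (state, symbol) and
    # parse (write, move, next_state) out of its reply. Toy rules here:
    rules = {("A", 0): (1, +1, "B"), ("B", 0): (1, -1, "HALT")}
    return rules.get((state, symbol), (symbol, 0, "HALT"))

def run(max_steps=100):
    # The wrapper, not the model, supplies the unbounded tape.
    tape, head, state = defaultdict(int), 0, "A"
    for _ in range(max_steps):
        if state == "HALT":
            break
        write, move, state = query_model(state, tape[head])
        tape[head] = write
        head += move
    return state, dict(tape)
```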

replies(4): >>45121263 #>>45122313 #>>45123029 #>>45125727 #
1. godelski ◴[] No.45123029[source]

  > you can trivially wire an LLM into a Turing complete system
Please don't do the "the proof is trivial and left to the reader"[0].

If it is so trivial, show it. Don't hand-wave; "put up or shut up". I think if you work this out you'll find it isn't so trivial...

I'm aware of some works, but every one I know of has limitations that mean it doesn't apply to LLMs. Plus, none of those are so trivial...

[0] https://en.wikipedia.org/wiki/Proof_by_intimidation

replies(1): >>45156649 #
2. vidarh ◴[] No.45156649[source]
You can do it yourself by setting temperature to zero and asking an LLM to execute the rules of a (2,3) Turing machine.

Since temperature zero makes it deterministic, you only need to test one step for each state and symbol combination.
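"Temperature zero" here means greedy decoding: dividing the logits by a temperature that approaches zero collapses the softmax onto the argmax, so the same prompt always yields the same token. A sketch with made-up logits:

```python
import math

def softmax(logits, temperature):
    # Dividing by the temperature before exponentiating: as temperature -> 0,
    # all probability mass collapses onto the largest logit.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def decode_greedy(logits):
    # The temperature-zero limit is plain argmax: deterministic on every run.
    return max(range(len(logits)), key=lambda i: logits[i])
```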

Are you suggesting you don't believe you can make a prompt that successfully encodes 6 trivial state transitions?

Either you're being intentionally obtuse, or you don't understand just how simple a minimal Turing machine is.
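For scale, the full rule table of a (2,3) machine really is just 2 states × 3 symbols = 6 entries. A sketch with an arbitrary example table (not the specific machine proven universal):

```python
# A (2,3) Turing machine: 2 states x 3 symbols = six transition rules total.
# This table is an arbitrary example, chosen only to show the size of the spec.
RULES = {
    # (state, read): (write, move, next_state);  move: +1 right, -1 left
    ("A", 0): (1, +1, "B"),
    ("A", 1): (2, -1, "A"),
    ("A", 2): (1, -1, "A"),
    ("B", 0): (2, +1, "A"),
    ("B", 1): (2, +1, "B"),
    ("B", 2): (0, -1, "A"),
}

def run(steps):
    # Start on a blank tape (unwritten cells read as 0) and step the machine.
    tape, head, state = {}, 0, "A"
    for _ in range(steps):
        write, move, state = RULES[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
    return state, tape, head
```

Checking an LLM's execution of this at temperature zero means comparing its answer for each of the six (state, symbol) pairs against this table once.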

replies(1): >>45161720 #
3. godelski ◴[] No.45161720[source]

  > Are you suggesting you don't believe you can't make a prompt that successfully encodes 6 trivial state transitions?
Please show it instead of doubling down. It's trivial, right? So it should be easier than arguing with me. That would end the conversation right here and now.

Do I think you can modify an LLM to be a Turing machine? Yeah, of course. But at that point it doesn't seem like we're dealing with an LLM anymore. In other comments you're making comparisons to humans; are you suggesting humans are deterministic? If not, then I see a flaw in your proof.