
170 points PaulHoule | 3 comments
measurablefunc No.45120049
There is a formal extensional equivalence between Markov chains & LLMs, but the only person who seems to be saying anything about this is Gary Marcus. He is constantly making the point that symbolic understanding cannot be reduced to a probabilistic computation: regardless of how large the graph gets, it will still be missing basic stuff like backtracking (which is available in programming languages like Prolog). I think that Gary is right on basically all counts. Probabilistic generative models are fun, but no amount of probabilistic sequence generation can be a substitute for logical reasoning.
replies(16): >>45120249 #>>45120259 #>>45120415 #>>45120573 #>>45120628 #>>45121159 #>>45121215 #>>45122702 #>>45122805 #>>45123808 #>>45123989 #>>45125478 #>>45125935 #>>45129038 #>>45130942 #>>45131644 #
vidarh No.45121215
> Probabilistic generative models are fun but no amount of probabilistic sequence generation can be a substitute for logical reasoning.

Unless you claim either that humans can't do logical reasoning, or that humans exceed the Turing computable, this reasoning is illogical: you can trivially wire an LLM into a Turing-complete system, and Turing equivalence does the rest.

And both of those claims lack evidence.

replies(4): >>45121263 #>>45122313 #>>45123029 #>>45125727 #
godelski No.45123029

  > you can trivially wire an LLM into a Turing complete system
Please don't do the "the proof is trivial and left to the reader"[0].

If it is so trivial, show it. Don't hand wave, "put up or shut up". I think if you work this out you'll find it isn't so trivial...

I'm aware of some works, but every one I know of has limitations that mean the result doesn't carry over to actual LLMs. Plus, none of them are so trivial...

[0] https://en.wikipedia.org/wiki/Proof_by_intimidation

replies(1): >>45156649 #
1. vidarh No.45156649
You can do it yourself by setting temperature to zero and asking an LLM to execute the rules of a (2,3) Turing machine.

Since temperature zero makes it deterministic, you only need to test one step for each state and symbol combination.

Are you suggesting you don't believe you can make a prompt that successfully encodes 6 trivial state transitions?

Either you're being intentionally obtuse, or you don't understand just how simple a minimal Turing machine is.
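
To make this concrete, here is a rough sketch: an arbitrary 2-state, 3-symbol transition table (illustrative only, not Wolfram's (2,3) machine) plus a plain reference simulator you can check an LLM's step-by-step answers against. The code is a sketch of the idea, not a claim about any particular model:

  # Illustrative only: an arbitrary 2-state, 3-symbol Turing machine
  # (not Wolfram's (2,3) machine) and a plain reference simulator.
  from collections import defaultdict

  # 6 rules: (state, symbol read) -> (symbol to write, head move, next state)
  RULES = {
      ("A", 0): (1, +1, "A"),
      ("A", 1): (2, +1, "B"),
      ("A", 2): (0, -1, "B"),
      ("B", 0): (2, -1, "A"),
      ("B", 1): (0, +1, "B"),
      ("B", 2): (1, -1, "A"),
  }

  def run(steps, tape_input=(1, 1, 0, 2)):
      tape = defaultdict(int, enumerate(tape_input))  # unwritten cells read as 0
      head, state = 0, "A"
      for _ in range(steps):
          write, move, state = RULES[(state, tape[head])]
          tape[head] = write
          head += move
      return state, head, dict(tape)

  print(run(20))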

replies(1): >>45161720 #
2. godelski No.45161720

  > Are you suggesting you don't believe you can make a prompt that successfully encodes 6 trivial state transitions?
Please show it instead of doubling down. It's trivial, right? So it is easier than responding to me. That'll end the conversation right here and now.

Do I think you can modify an LLM to be a Turing machine? Yeah, of course. But at this point it doesn't seem like we're actually dealing with an LLM anymore. In other comments you're making comparisons to humans; are you suggesting humans are deterministic? If not, well, I see a flaw in your proof.

replies(1): >>45165905 #
3. vidarh No.45165905
I've given an example prompt you can use as a basis in another comment, but let me double down, because it really matters that you seem to think this is a complex problem:

> That'll end the conversation right here and now.

We both know that isn't true, because it is so trivial that if you had any intention of being convinced, you'd have accepted the point already.

Do you genuinely want me to believe that you think an LLM can't act as a simple lookup from 6 keys (2 states, 3 symbols) to 6 tuples?

Because that is all it takes to show that an LLM + a loop can act like a Turing machine given the chance.

If you understand Turing machines, this is obvious. If you don't, even executing the steps personally per the example I gave in another comment is not likely to convince you.

> Do I think you can modify an LLM to be a Turing machine? Yeah, of course.

There's no need to modify one. This can be done by enclosing an LLM in simple scaffolding, or you can play it out in a chat as long as you can set temperature to 0 (it will work without that as well, to an extent, but you can't guarantee that it will keep working).
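
As a sketch of what I mean by scaffolding (same made-up 6-rule machine as the sketch in my earlier comment; query_llm below is a hypothetical stand-in rather than any particular vendor's API, and here it simply replays the rules so the loop runs end to end):

  # The loop owns the tape and head; the model is only ever asked
  # "given this state and symbol, what is the transition?"
  from collections import defaultdict

  PROMPT = """You are executing a Turing machine with these 6 rules:
  A,0 -> write 1, move R, next A     A,1 -> write 2, move R, next B
  A,2 -> write 0, move L, next B     B,0 -> write 2, move L, next A
  B,1 -> write 0, move R, next B     B,2 -> write 1, move L, next A
  State: {state}. Symbol under head: {symbol}.
  Answer with exactly: write=<symbol> move=<L|R> next=<state>"""

  def query_llm(prompt):
      # Hypothetical stand-in for a temperature-0 chat completion call.
      rules = {("A", 0): "write=1 move=R next=A", ("A", 1): "write=2 move=R next=B",
               ("A", 2): "write=0 move=L next=B", ("B", 0): "write=2 move=L next=A",
               ("B", 1): "write=0 move=R next=B", ("B", 2): "write=1 move=L next=A"}
      state = prompt.split("State: ")[1].split(".")[0]
      symbol = int(prompt.split("Symbol under head: ")[1].split(".")[0])
      return rules[(state, symbol)]

  def step(state, symbol):
      reply = query_llm(PROMPT.format(state=state, symbol=symbol))
      parts = dict(p.split("=") for p in reply.split())
      return int(parts["write"]), (+1 if parts["move"] == "R" else -1), parts["next"]

  tape, head, state = defaultdict(int, enumerate((1, 1, 0, 2))), 0, "A"
  for _ in range(20):
      write, move, state = step(state, tape[head])
      tape[head] = write
      head += move
  print(state, head, dict(tape))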

> But at this point it doesn't seem like we're actually dealing with an LLM anymore.

Are humans no longer human because we can act like a Turing machine?

The point is that anything Turing complete is computationally equivalent to anything else that is Turing complete. So, absent any evidence that it is possible to compute functions outside the Turing computable, demonstrating Turing completeness is sufficient grounds to assert equivalence in computational power.

The argument is not that any specific LLM is capable of reasoning like a human, but that there is no fundamental limit preventing LLMs from reasoning like a human.

> are you suggesting humans are deterministic?

I'm outright claiming we don't know of any mechanism by which we can calculate functions exceeding the Turing computable, nor have we ever seen evidence of it, nor do we know what that would even look like.

If you have any evidence that we can, or any evidence it is even possible - something that'd get you a Nobel Prize if you could show it - then by all means, enlighten us.