
170 points by PaulHoule | 1 comment
measurablefunc (No.45120049)
There is a formal extensional equivalence between Markov chains and LLMs, but the only person who seems to be saying anything about this is Gary Marcus. He constantly makes the point that symbolic understanding cannot be reduced to probabilistic computation: no matter how large the graph gets, it will still be missing basic machinery like backtracking (which is built into programming languages like Prolog). I think Gary is right on basically all counts. Probabilistic generative models are fun, but no amount of probabilistic sequence generation can substitute for logical reasoning.
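As a concrete illustration of the distinction being drawn (this sketch is not from the comment itself): a Markov-style generator commits to each token as it samples and never revisits one, whereas a Prolog-style backtracking search can undo a choice when a later constraint fails. A minimal Python sketch, using 4-queens as a hypothetical constraint problem:

```python
import random

# A Markov-style generator commits to each choice as it samples;
# it never revisits an earlier token.
def markov_generate(chain, start, length, seed=0):
    rng = random.Random(seed)
    state, out = start, [start]
    for _ in range(length - 1):
        state = rng.choice(chain[state])
        out.append(state)
    return out

# A Prolog-style backtracking search, by contrast, can undo a choice
# when a later constraint fails and try the next alternative.
def solve(assignment, domains, consistent):
    if len(assignment) == len(domains):
        return assignment
    var = len(assignment)
    for value in domains[var]:
        candidate = assignment + [value]
        if consistent(candidate):
            result = solve(candidate, domains, consistent)
            if result is not None:
                return result
    return None  # every value failed: backtrack to the previous choice

# Example constraint: 4-queens (index = row, value = column; no two
# queens may share a column or a diagonal).
def no_attack(a):
    return all(a[i] != a[j] and abs(a[i] - a[j]) != j - i
               for i in range(len(a)) for j in range(i + 1, len(a)))

solve([], [list(range(4))] * 4, no_attack)  # → [1, 3, 0, 2]
```

The search hits dead ends under column 0 and retreats, something the forward-only sampler has no mechanism for.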
bubblyworld (No.45123989)
If you want to understand SOTA systems then I don't think you should study their formal properties in isolation, i.e. it's not useful to separate them from their environment. Every LLM-based tool has access to code interpreters these days, which makes this point largely moot.
wavemode (No.45128356)
If my cat has access to my computer keyboard, that doesn't make it a software engineer.
bubblyworld (No.45136293)
LLMs can clearly make use of tools, unlike your cat. The claim was that they cannot do backtracking natively, which may or may not be true, but it's irrelevant because they can do it through code.

Who said anything about software engineers?
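A hedged sketch of what "doing it through code" might look like: the model itself only emits text, but an agent loop routes a tool request to an interpreter that runs an actual backtracking procedure. `fake_model`, the tool name `search`, and the word-length puzzle are all hypothetical stand-ins, not any real LLM API:

```python
# Backtracking subset search: find words whose combined length hits a target,
# undoing a choice whenever it overshoots.
def depth_first(words, target):
    def go(i, chosen, total):
        if total == target:
            return chosen
        if i == len(words) or total > target:
            return None
        picked = go(i + 1, chosen + [words[i]], total + len(words[i]))
        if picked is not None:
            return picked
        return go(i + 1, chosen, total)  # backtrack: drop words[i]
    return go(0, [], 0)

TOOLS = {"search": depth_first}

def fake_model(prompt):
    # A real LLM decides when to emit a tool call; here it is hard-coded.
    return ("tool", "search", (["foo", "bar", "bazz"], 7))

def agent(prompt):
    kind, name, args = fake_model(prompt)
    if kind == "tool":
        return TOOLS[name](*args)

agent("which words have total length 7?")  # → ["foo", "bazz"]
```

The backtracking happens entirely inside the tool; whether that counts as the model "having" backtracking is exactly the disagreement in this thread.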