Unless you claim either that humans can't do logical reasoning, or that humans exceed the Turing computable, this reasoning doesn't hold: you can trivially wire an LLM into a Turing complete system, and Turing equivalence then puts that system and the human on the same theoretical footing. And either of those two claims lacks evidence.
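To make the "wire an LLM into a Turing complete system" point concrete, here is a minimal sketch (Python, purely illustrative): an unbounded tape plus a transition function driven in a loop is exactly the Turing machine control structure. The transition function below is a fixed table so the example actually runs; nothing changes structurally if a model call computes that mapping instead.

    from collections import defaultdict

    def transition(state: str, symbol: str) -> tuple[str, str, int]:
        # Stand-in for the model call: given the current state and the symbol
        # under the head, return (new_state, symbol_to_write, head_move).
        # Here it's a fixed table that inverts bits and halts on blank; an LLM
        # (plus whatever prompt encodes the rules) could compute this instead.
        if symbol == "_":
            return "halt", "_", 0
        return "scan", "1" if symbol == "0" else "0", 1

    def run(initial_tape: str) -> str:
        tape = defaultdict(lambda: "_", enumerate(initial_tape))  # unbounded tape
        head, state = 0, "scan"
        while state != "halt":
            state, write, move = transition(state, tape[head])
            tape[head] = write
            head += move
        return "".join(tape[i] for i in sorted(tape)).rstrip("_")

    print(run("0110"))  # -> "1001"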
It's Searle's Chinese Room scenario all over again, which everyone seems to have forgotten amidst the BS marketing storm around LLMs. A person with no knowledge of Chinese, following a set of instructions and reading from a dictionary, can translate texts well enough to substitute for hiring a translator who understands Chinese; however, we would not claim that this person understands Chinese.
An LLM hooked up to a Turing machine would be similar with respect to logical reasoning. When we claim someone reasons logically, we usually don't imagine they randomly throw ideas at the wall and then consult the outputs to determine whether they reasoned logically. Instead, the process of deduction makes the line of reasoning decidedly not stochastic. I can't believe we've gotten to such a mad place that basic notions like logical deduction are being confused with stochastic processes.

Ultimately, I would agree that it all comes back to the problem of other minds: you either take a fully reductionist stance and claim the brain and intellection are nothing more than probabilistic neural firing, or you take a non-reductionist stance and allow that there may be more to it. In either case, I think claiming that LLMs+tools are equivalent to whatever process humans perform is kind of silly and severely underrates what humans are capable of^1.
1: Then again, this has been going on since the dawn of computing, which has always put forth its brain=computer metaphors more by reducing what we mean by "thought" than by any real, substantively justified connection.
And you failed to understand my argument. You are a Turing machine. I am a Turing machine. The LLM in a loop is a Turing machine.
Unless you can show evidence that, unlike the LLMs*, we can execute more than the Turing computable, the theoretical limits on our reasoning are exactly the same as those of the LLM.
Absent any evidence at all that we can solve anything outside the Turing computable, or that any computable function exists outside the Turing computable, the burden of proof is firmly on those making such an outrageous assumption to produce at least a single example of such a computation.
This argument doesn't mean any given LLM is capable of reasoning at the level of a human on its own, any more than it means a given person is able to translate Chinese on their own. But it does mean there's no evidential basis for claiming that no LLM can be made to reason like a human, any more than there's a basis for claiming that no person can learn Chinese.
> When we claim someone reasons logically we usually don't imagine they randomly throw ideas at the wall and then consult outputs to determine if they reasoned logically
This isn't how LLMs work either, so this is entirely irrelevant.