>If you believe this to be true, you must then also accept that it’s equally irrational to claim these models are actually “reasoning”.
If a correct, low-probability conclusion was arrived at from a novel prompt, where neither the prompt nor the conclusion existed in the training set, THEN by logic the ONLY possible way that conclusion could have been derived is through reasoning. We can know this even though we don't know HOW the model is reasoning.
The only other possible way that an LLM can arrive at low probability conclusions is via random chance.
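To put a rough number on "random chance" (a back-of-envelope sketch; the vocabulary size and answer length here are assumptions for illustration, not measurements from any particular model):

    import math

    # Illustrative assumptions, not figures from any real model:
    vocab_size = 50_000    # tokens the model can choose from at each step
    answer_length = 50     # tokens in one specific, coherent, correct answer

    # Probability of emitting that exact token sequence by uniform random chance
    p_chance = (1 / vocab_size) ** answer_length

    print(f"p ~ 10^{math.log10(p_chance):.0f}")   # prints: p ~ 10^-235

Even granting that real sampling is nothing like uniform, the sketch only needs to make one point: a specific, correct, multi-token answer to a novel prompt is not something you stumble into by chance.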
>The point of citing the Apple paper was that there’s currently a lack of consensus and in some cases major disagreement about what is actually occurring behind the scenes.
This isn't true. I quote the parent comment:
"What this tells me is there is clearly no “reasoning” happening whatsoever with either model, despite marketing claiming as such."
The parent is clearly saying LLMs can't reason, period.
>Everything you’ve written to justify the idea that reasoning is occurring can be used against the idea that reasoning is occurring. This will continue to be true until we gain a better understanding of how these models work.
Right, and I took BOTH pieces of contradictory evidence into account and arrived at the most logical conclusion. I quote myself:
"You have contradictory evidence therefore the LLM must be capable of BOTH failing and succeeding in reason. That's the most logical answer."
>The reason the Apple paper is interesting is because it’s some of the latest writing on this subject, and points at inconvenient truths about the operation of these models that at the very least would indicate that if reasoning is occurring, it’s extremely inconsistent and unreliable.
Right. And this, again, was my conclusion. But I took it a bit further. Read again what I said in the first paragraph of this very response.
>No need to be combative here - aside from being against HN guidelines, there just isn’t enough understanding yet for anyone to be making absolute claims, and the point of my comment was to add counterpoints to a conversation, not make some claim about the absolute nature of things.
You're not being combative and neither am I. I respect your analysis here, even though you dismissed a lot of what I said (see the quotations), and even though I completely disagree and believe you are wrong.
I think there's a further logical argument you're not seeing, and I pointed it out in the first paragraph: LLMs are arriving at novel answers to novel prompts, neither of which exists in the training set. These answers have such a low probability of arising by random chance that the ONLY remaining explanation is covered by the broadly defined word: "reasoning".
Again, there is also evidence of answers that aren't arrived at via reasoning, but that doesn't negate the existence of answers that can only be arrived at via reasoning.