When I prompt an RLM, I can see that it spits out reasoning steps. But I don't consider that evidence that RLMs are capable of reasoning.
There's no evidence to be had when we only know the inputs and outputs of a black box.
IMO the paper itself should have touched on the lack of work discussing what's inside the black box that makes them Reasoning LMs. It does mention some tree algorithm that's supposedly key to their reasoning capabilities.
I'm by no means attacking the paper, as its intent is to demonstrate the lack of success at solving puzzles that are simple to formulate yet complex.
I wasn't making a point; I was genuinely asking in case someone knows of papers I could read that claim, with evidence, that RLMs actually reason, and how.
Pattern matching is a component of reasoning. Not === reasoning.