
214 points optimalsolver | 1 comment
hirako2000 ◴[] No.45770845[source]
Has anyone ever found an ML/AI paper that makes the claim that RLMs can reason?

When I prompt an RLM, I can see it spits out reasoning steps. But I don't consider that evidence that RLMs are capable of reasoning.

replies(3): >>45770918 #>>45770977 #>>45771339 #
tempfile ◴[] No.45771339[source]
I don't understand what point you are making. Doesn't the name "reasoning language models" itself claim that they can reason? Why do you want to see it explicitly written down in a paper?
replies(2): >>45771590 #>>45774621 #
hirako2000 ◴[] No.45771590[source]
This very paper rests on the assumption that reasoning (to solve puzzles) is at play. It calls those LLMs RLMs.

Imo the paper itself should have touched on the lack of literature discussing what's inside the black box that makes them reasoning LMs. It does mention a tree algorithm supposedly key to reasoning capabilities.
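(The thread doesn't say which tree algorithm the paper means; a common candidate in this literature is a Tree-of-Thoughts-style search, where candidate next "thoughts" are expanded and scored, and the best partial branch is explored first. A minimal sketch on a toy puzzle, with the scoring and move set purely illustrative:)

```python
# Hedged sketch only: assumes the "tree algorithm" is something like
# best-first Tree-of-Thoughts search. Toy puzzle: reach a target number
# from a start value using +3 or *2 moves; the score is distance to target.
import heapq

def tree_search(start, target, max_depth=6):
    # Each frontier entry: (score, value, path); lower score = closer to target.
    frontier = [(abs(start - target), start, [start])]
    while frontier:
        score, value, path = heapq.heappop(frontier)
        if value == target:
            return path                      # goal reached: the path is the "reasoning trace"
        if len(path) > max_depth:
            continue                         # prune branches that are too deep
        for nxt in (value + 3, value * 2):   # expand candidate next "thoughts"
            heapq.heappush(frontier, (abs(nxt - target), nxt, path + [nxt]))
    return None

print(tree_search(1, 16))  # → [1, 4, 8, 16]
```

In the LLM setting, the expansion step would sample candidate continuations from the model and the score would come from a model-based evaluator rather than a fixed heuristic; whether that constitutes "reasoning" is exactly the question being debated here.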

By no means am I attacking the paper, as its intent is to demonstrate these models' lack of success at solving puzzles that are simple to formulate yet complex.

I was not making a point; I was genuinely asking in case someone knows of papers I could read that claim, with evidence, that these RLMs actually reason, and how.