I feel like if LLMs "knew" when they're out of their depth, they could be much more useful. The question is whether knowing when to stop can be meaningfully learned from examples with RL. From everything we've seen, the hallucination problem and this stopping problem boil down to the same issue: you can teach the model to say "I don't know", but if that's part of the training data, it might just spit out "I don't know" to random questions, because it's a likely response in the space of possible responses, rather than saying "I don't know" specifically when it doesn't know.
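To make that failure mode concrete, here's a toy sketch (plain Python, everything hypothetical and invented for illustration, not any real training setup) of two reward schemes for an RL step. With a flat reward for abstaining, "I don't know" is a safe bet on every question; conditioning the reward on whether the model actually would have been wrong fixes that in principle, but requires exactly the self-knowledge that's missing.

    # Toy illustration only: names and numbers are made up.

    def reward_flat(answer: str, correct: str) -> float:
        if answer == "I don't know":
            return 0.5                      # abstaining always pays something
        return 1.0 if answer == correct else -1.0

    def reward_conditioned(answer: str, correct: str,
                           model_would_be_wrong: bool) -> float:
        if answer == "I don't know":
            # only pays off when the model genuinely couldn't have answered
            return 0.5 if model_would_be_wrong else -0.5
        return 1.0 if answer == correct else -1.0

    # Under reward_flat, a policy unsure of its own accuracy maximizes
    # expected reward by abstaining everywhere (the "I don't know to
    # random questions" behavior). Under reward_conditioned, abstention
    # is only attractive where the model would actually have failed --
    # but that "would be wrong" signal is the hard part.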
SocratesAI is still unsolved, and LLMs are probably not the path to knowing that you know nothing.
I used to think this, but I'm no longer sure.
Large-scale tasks just grind to a halt with more modern LLMs because of this perception of impassable complexity.
And it's not that they need extensive planning; the LLM knows what needs to be done (it'll even tell you!), it's just more work than will fit within a "session" (an arbitrary boundary), so it would rather refuse than get started.
So you're now looking at TODOs, and hierarchical plans, and all this unnecessary pre-work, even when the task scales horizontally very well (if it would just jump into it).
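To spell out what I mean by "scales horizontally": if each subtask fits in a session on its own, there's no real planning problem, just a fan-out. A minimal sketch, where llm_call() is a hypothetical stand-in for a single self-contained model request (not any particular API):

    from concurrent.futures import ThreadPoolExecutor

    def llm_call(prompt: str) -> str:
        # stand-in for one self-contained LLM request
        raise NotImplementedError

    def run_horizontally(subtasks: list[str]) -> list[str]:
        # No hierarchical plan, no TODO list: each subtask is handled
        # independently, so they can simply be fanned out.
        with ThreadPoolExecutor(max_workers=8) as pool:
            return list(pool.map(llm_call, subtasks))

    # e.g. run_horizontally([f"Port {path} to the new API" for path in files])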
But yes, I assume you mean they abort their loop after a while, which they do.
This whole idea of a "reasoning benchmark" doesn't sit well with me. It still doesn't seem well-defined.
Maybe it's just my own bias or lack of intelligence, but it seems to me that using language models for "reasoning" is still more or less a gimmick and a convenience feature (to automate re-prompts, clarifications, etc., as far as possible).
But reading this pop-sci article from summer 2022, it seems the definition problem hasn't changed very much since then. Granted, it's about AI progress before ChatGPT and doesn't even mention the GPT base models, and some of the tasks mentioned in the article seem dated today.
But IMO, there is still no AI model that can be trusted to, for example, accurately summarize a Wikipedia article.
Not all humans can do that either, sure. But humans are better at knowing what they don't know, and at deciding which other humans can be trusted. And of course, none of this is an arithmetic or calculation task.
https://www.science.org/content/article/computers-ace-iq-tes...