What a great way of framing it. I've been trying to explain this to people, but this is a succinct version of what I was stumbling to convey.
That being said, there are methods to train LLMs against hallucinations, and they do reduce hallucination rates. But anti-hallucination capabilities are fragile and don't fully generalize. There's no (known) way to train an LLM to be fully aware of its own capabilities.
We need to make these models much, much better, but it's going to be quite difficult to reduce hallucination rates to even human levels. And the BS will always be with us. I suppose BS is the natural side effect of any complex system, artificial or biological, that tries to navigate the problem space of reality and speak about it. These systems, sometimes called "minds", are going to produce things that sound right but just are not true.
"Critical thinking" and "scientific method" feel quite similar to the "let's think step by step" prompt for the early LLMs. More elaborate directions, compensating for the more subtle flaws of a more capable mind.