> hallucinations aren’t a bug of LLMs; they are a feature. Indeed, they are the feature. All an LLM does is produce hallucinations; it’s just that we find some of them useful.
Nice.
But much more than an arithmetic engine, the current crop of AI needs an epistemic engine: something that helps it follow logic, avoid contradictions, and distinguish well-established facts from shaky conjectures. Then we might start trusting the AI.
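To make that concrete, here is a toy sketch of what such an "epistemic engine" layered over an LLM's output might look like. Everything here is hypothetical (the `Claim` and `EpistemicEngine` names are made up), and the contradiction check is a crude stand-in for real logical inference or an NLI model:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    support: str                      # "established", "conjecture", or "unsupported"
    sources: list = field(default_factory=list)

class EpistemicEngine:
    def __init__(self):
        self.accepted = []            # claims the engine has not rejected

    def contradicts(self, a, b):
        # Toy check: one statement is the literal negation of the other.
        # A real engine would need logical inference or an NLI model here.
        return a == f"not {b}" or b == f"not {a}"

    def consider(self, claim):
        # Refuse any claim that contradicts something already accepted.
        for prior in self.accepted:
            if self.contradicts(claim.statement, prior.statement):
                return False
        # A claim without sources is downgraded to conjecture, so downstream
        # consumers know how much weight to give it.
        if claim.support == "established" and not claim.sources:
            claim.support = "conjecture"
        self.accepted.append(claim)
        return True

engine = EpistemicEngine()
engine.consider(Claim("water boils at 100 C at sea level", "established", ["textbook"]))
print(engine.consider(Claim("not water boils at 100 C at sea level", "conjecture")))  # False
```

The point is only that claims would carry an explicit support level and be checked against what has already been accepted, rather than emitted as undifferentiated text.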
To me this is the most bizarre part. Have we ever had a technology deployed at this scale without a true understanding of its inner workings?
My fear is that the general public’s perception of AI will be damaged, since for most people, LLMs = AI.