
416 points | floverfelt | 1 comment
sebnukem2:
> hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful.

Nice.

nine_k:
I'd rather say that LLMs live in a world that consists entirely of stories, nothing but words and their combinations. They have no other reality. So they are good at generating more stories that sit well with the stories they already know. But the stories are often imprecise, and sometimes contradictory, so they have to guess. Also, LLMs don't know how to count, but they know that two usually follows one, and that three is usually said to be larger than two, so they can speak in a way that mostly does not contradict this knowledge. They can use tools to count, like a human who knows digits would use a calculator.
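
To make the "use a tool to count" point concrete, here is a minimal sketch (plain Python, not any particular vendor's API; the llm_plan callback is a hypothetical stand-in for the model): the model only decides which tool to call and with what arguments, while the arithmetic itself runs deterministically.

    import ast
    import operator

    # Whitelist of operations the "calculator" tool supports.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def calculator(expression: str) -> float:
        """Deterministic tool: evaluate a simple arithmetic expression."""
        def ev(node):
            if isinstance(node, ast.Constant):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](ev(node.left), ev(node.right))
            raise ValueError("unsupported expression")
        return ev(ast.parse(expression, mode="eval").body)

    def answer(question: str, llm_plan) -> str:
        """llm_plan stands in for the model: it returns either a final
        answer or a ("calculator", expression) pair. The harness runs the
        tool and feeds back the exact result, so the number in the reply
        is computed, not guessed."""
        step = llm_plan(question)
        if isinstance(step, tuple) and step[0] == "calculator":
            return f"{question} -> {calculator(step[1])}"
        return step

    # Toy stand-in for the model's decision; a real LLM would emit this call.
    print(answer("What is 12 * 34 + 5?",
                 lambda q: ("calculator", "12 * 34 + 5")))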

But much more than an arithmetic engine, the current crop of AI needs an epistemic engine: something that would help it follow logic, avoid contradictions, and determine what is a well-established fact and what is a shaky conjecture. Then we might start trusting the AI.

chadcmulligan:
One night I asked it to write me some stories; it did seem happy doing that. I just kept saying "do what you want" whenever it asked me for a choice. It's a fun little way to spend a couple of hours.