
371 points ulrischa | 2 comments
ggm ◴[] No.43238015[source]
I'm just here to whine, almost endlessly, that the word "hallucination" is a term of art chosen deliberately because it helps promote a sense that AGI exists, by using language that implies reasoning and consciousness. I personally dislike this. I think we were mistaken in allowing AI proponents to repurpose language in that way.

It's not hallucinating, Jim, it's statistical coding errors. It's floating-point rounding mistakes. It's the wrong cell in the Excel table.

replies(1): >>43239126 #
rhubarbtree ◴[] No.43239126[source]
“Errors”?
replies(1): >>43239679 #
namaria ◴[] No.43239679[source]
Errors are a category of well-understood, explicit failures.

Slop is the best description. LLMs are sloppy tools, and some people are not discerning enough to realise that blindly running this slop endangers themselves and others.

replies(1): >>43241766 #
1. rhubarbtree ◴[] No.43241766[source]
I'm not sure errors are really understood that well.

I ask for 2+5, you give me 10. Is that an error?

But then it turns out the user for this program wanted + to be a multiply operator, so the result is "correct".

But then it turns out that another user in the same company wanted it to mean "divide".

It seems to me it is _very_ rarely the case that we can say for sure that software contains errors or is error-free, because even at the level of the spec there are just no absolutes.
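
To make that concrete, here's a rough sketch (the per-user operator table is invented for illustration, not taken from any real system): the same output only counts as an "error" relative to whichever spec you assume.

    # The same output is "correct" or "an error" only relative to an assumed spec.
    SPECS = {
        "user_a": lambda a, b: a + b,   # "+" means add:      expects 7
        "user_b": lambda a, b: a * b,   # "+" means multiply: expects 10
        "user_c": lambda a, b: a / b,   # "+" means divide:   expects 0.4
    }

    def is_error(output, a, b, spec):
        # "Error" is only defined once you fix the intended semantics.
        return output != SPECS[spec](a, b)

    print(is_error(10, 2, 5, "user_a"))  # True  -- wrong under user A's intent
    print(is_error(10, 2, 5, "user_b"))  # False -- "correct" under user B's intent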

The generality of "correctness" achieved by a human programmer is caused by generality of intent - they are trying to make the software work as well as possible for its users in all cases.

An LLM has no such intent. It just wants to model language well.

replies(1): >>43247371 #
2. namaria ◴[] No.43247371[source]
LLM output isn't an 'error' or a 'hallucination', because it can only ever resemble human language. There is no intent. There is nothing being communicated.

If an LLM outputs text, that is always the correct output, because it is programmed to extend a given piece of text by emitting tokens that translate to human-readable text.

LLMs are only sometimes coincidentally correct: given a bit of text to extend, and with some clever stopping and waiting for more bits of text from a person, they can render something that looks like a conversation and reads like a cogent one. That is what they are programmed to do, and they do it well.
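
As a rough sketch of what I mean (a toy stand-in, not any real model API), the whole thing is just a loop that keeps extending text until a stop marker, and we choose to read the result as a reply:

    # Toy stand-in for a model: it just emits the next token of a continuation.
    CANNED = iter(["Sure", ",", " here", " you", " go", ".", "<end>"])

    def next_token(text):
        # Stand-in for sampling a statistically plausible continuation of `text`.
        return next(CANNED)

    def chat_turn(history, user_message, stop="<end>"):
        text = history + "User: " + user_message + "\nAssistant:"
        while True:
            tok = next_token(text)   # pick a plausible next token
            if tok == stop:          # clever stopping point...
                return text          # ...and the continuation reads like a "reply"
            text += tok              # otherwise, keep extending the text

    print(chat_turn("", "What is 2+5?"))

Nothing in that loop knows or cares whether the continuation is true; it only has to look like text.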

Text that is coherent but fails to conform to reality in some way or another is just part of how they work. They are not failing; they are working as intended. They don't hallucinate or produce errors; they are merely sometimes coincidentally correct.

That's what I meant by my comment. Saying that LLMs 'hallucinate' or 'are wrong about something' is incorrect. They are not producing errors. They are successfully doing what they were programmed to do. LLMs produce sloppy text that is sometimes coincidentally informative.