
371 points ulrischa | 2 comments
throwaway314155 No.43235006
> The real risk from using LLMs for code is that they’ll make mistakes that aren’t instantly caught by the language compiler or interpreter. And these happen all the time!

Are these not considered hallucinations still?
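
For concreteness, here is a minimal Python sketch of the distinction the article is drawing (the function names and the specific bug are invented for illustration): a hallucinated API fails the moment the interpreter reaches it, while a subtler mistake runs cleanly and simply returns the wrong answer.

    # Invented example: two kinds of LLM coding mistakes.

    # 1. A hallucinated API: str has no reverse() method, so the interpreter
    #    rejects this the first time the line runs.
    def reverse_greeting_hallucinated(name: str) -> str:
        return f"hello {name}".reverse()  # AttributeError at call time

    # 2. A subtler mistake: runs without complaint, but the slice silently
    #    drops the last character instead of just reversing the string.
    def reverse_greeting_buggy(name: str) -> str:
        return f"hello {name}"[-2::-1]  # off-by-one; looks plausible, is wrong

    print(reverse_greeting_buggy("world"))         # no error, wrong output
    print(reverse_greeting_hallucinated("world"))  # fails immediately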

replies(3): >>43235072 #>>43235140 #>>43237891 #
dzaima No.43235072
Humans can hallucinate up some API they want to call in the same way that LLMs can, but you don't call all human mistakes hallucinations; classifying everything LLMs do wrong as hallucinations would seem rather pointless to me.
replies(2): >>43235190 #>>43235270 #
1. thylacine222 No.43235270
Analogizing this to human hallucination is silly. In the instance you're talking about, the human isn't hallucinating; they're lying.
replies(1): >>43235333 #
2. dzaima No.43235333
I definitely wouldn't say I'm lying (...to myself? what? or perhaps to others, for a quick untested response in a chatroom or something) whenever I write some code and it turns out that I misremembered the name of an API. "Hallucination" for that might be over-dramatic, but at least it's a somewhat sensible description.
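
To put a concrete (invented) face on that kind of slip: a habit from JavaScript's JSON.parse carried over into Python, where the interpreter flags it immediately.

    import json

    # Misremembered name: JavaScript has JSON.parse, but Python's json module doesn't.
    data = json.parse('{"ok": true}')  # AttributeError: module 'json' has no attribute 'parse'
    # The real call is json.loads(...); the mistake is caught the instant the
    # line executes, not after some silent misbehavior.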