
371 points | ulrischa | 1 comment
throwaway314155 ◴[] No.43235006[source]
> The real risk from using LLMs for code is that they’ll make mistakes that aren’t instantly caught by the language compiler or interpreter. And these happen all the time!

Are these not considered hallucinations still?
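
A hypothetical sketch of the two failure modes being contrasted (not from the article): a made-up API that blows up the moment it runs, versus plausible code that runs fine and is quietly wrong.

    import datetime

    # Mistake the interpreter catches instantly: a hallucinated API.
    # datetime has no top-level parse() function, so this raises
    # AttributeError the first time the function is called.
    def days_until_loud(date_string: str) -> int:
        target = datetime.parse(date_string)
        return (target - datetime.datetime.now()).days

    # Mistake nothing catches automatically: this runs and returns a
    # number, just the wrong one -- a naive day difference labelled as
    # "business days", so weekends are silently counted too.
    def business_days_until_quiet(target: datetime.datetime) -> int:
        return (target - datetime.datetime.now()).days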

replies(3): >>43235072 #>>43235140 #>>43237891 #
dzaima ◴[] No.43235072[source]
Humans can hallucinate up some API they want to call in the same way that LLMs can, but you don't call all human mistakes hallucinations; classifying everything LLMs do wrong as hallucinations would seem rather pointless to me.
replies(2): >>43235190 #>>43235270 #
1. ForTheKidz ◴[] No.43235190[source]
Maybe we should stop referring to undesired output (confabulation? Bullshit? Making stuff up? Creativity?) as some kind of input delusion. Hallucination is already a meaningful word and this is just gibberish in that context.

As best I can tell, the only reason this term stuck is that early image generation looked super trippy.