
371 points by ulrischa | 1 comment
throwaway314155 No.43235006
> The real risk from using LLMs for code is that they’ll make mistakes that aren’t instantly caught by the language compiler or interpreter. And these happen all the time!

Are these not considered hallucinations still?

replies(3): >>43235072 >>43235140 >>43237891
simonw No.43237891
I think of hallucinations as instances where an LLM invents something that is entirely untrue - like a class or method that doesn't exist, or a fact about the world that's simply not true.

I guess you could call bugs in LLM code "hallucinations", but they feel like a slightly different thing to me.
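To make the distinction concrete, here's a hypothetical Python sketch (both the json.parse call and the sum_first_n helper are invented for illustration). The first mistake is a classic hallucination, and the interpreter rejects it the moment it runs; the second is an ordinary bug that executes without complaint:

    import json

    # 1. Hallucination: the model invents an API that doesn't exist.
    #    Python's json module has json.loads, not json.parse, so the
    #    interpreter catches this instantly:
    #
    #    data = json.parse('{"a": 1}')  # AttributeError: module 'json' has no attribute 'parse'

    # 2. Bug: plausible code that runs cleanly but is subtly wrong.
    #    This "sum the first n items" helper drops the last item
    #    (off-by-one), and no interpreter error will ever flag it:
    def sum_first_n(items, n):
        total = 0
        for i in range(n - 1):  # bug: should be range(n)
            total += items[i]
        return total

    print(sum_first_n([1, 2, 3, 4], 3))  # prints 3, not the expected 6

The second kind is the risk being pointed at: nothing short of a test or a careful read will surface it.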

replies(1): >>43240004
throwaway314155 No.43240004
That's a great distinction, actually. Thanks.