
371 points ulrischa | 1 comment | source
throwaway314155 ◴[] No.43235006[source]
> The real risk from using LLMs for code is that they’ll make mistakes that aren’t instantly caught by the language compiler or interpreter. And these happen all the time!

Are these still not considered hallucinations?

replies(3): >>43235072 >>43235140 >>43237891
1. fweimer ◴[] No.43235140[source]
I don't think it's necessarily a hallucination if models accurately reproduce the code quality of their training data.
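For concreteness, here is a minimal, hypothetical Python sketch of the kind of mistake being discussed: code that the interpreter accepts and runs without complaint but that is nonetheless wrong. The function and its bug are illustrative assumptions, not taken from either comment.

    def latest_version(versions):
        """Return the newest semantic version string from a list."""
        # Looks plausible and runs cleanly, but max() compares strings
        # lexicographically, so "1.9.0" is reported as newer than "1.10.0".
        # No interpreter or compiler error ever surfaces this mistake;
        # only a test or a human review would catch it.
        return max(versions)

    print(latest_version(["1.9.0", "1.10.0"]))  # prints "1.9.0", which is wrong

Because nothing crashes, a mistake like this slips past the quick "does it run?" check and matches the point above: it may not be a hallucination so much as a faithful reproduction of mediocre training data.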