
65 points appwiz | 1 comment | source
simonw ◴[] No.44383691[source]
I still don't think hallucinations in generated code matter very much. They show up the moment you try to run the code, and with the current batch of "coding agent" systems it's the LLM itself that spots the error when it attempts to run the code.
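
A made-up illustration of what I mean (the call is deliberately hallucinated; json.parse does not exist in Python's stdlib):

    import json

    data = json.parse('{"a": 1}')  # hallucinated name; the real function is json.loads
    # AttributeError: module 'json' has no attribute 'parse' -- caught on the first run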

I was surprised that this paper talked more about RAG solutions than tool-use-based solutions. The latter seem to me like a proven approach at this point.

replies(4): >>44384474 #>>44384576 #>>44387027 #>>44388124 #
imiric ◴[] No.44384474[source]
I'm surprised to read that from a prominent figure in the industry such as yourself.

The problem is that many hallucinations do not produce a runtime error and can be very difficult for a human to spot, even when the code is thoroughly reviewed, which in many cases it isn't. They can introduce security issues, do something entirely different from what the user asked for (or something they never asked for), do things inefficiently, ignore conventions and language idioms, or just be dead code.
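
A contrived sketch of the kind of thing I mean (hypothetical code, but it runs without error and passes a happy-path test):

    import hmac, hashlib

    def verify_signature(payload: bytes, signature: str, secret: bytes) -> bool:
        expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
        # Runs fine and looks correct, but string comparison short-circuits,
        # leaking timing information; hmac.compare_digest is the safe choice.
        return signature == expected

No traceback will ever point at that line.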

For runtime errors, feeding them back to the LLM, as you say, might fix them. But even in those cases, the produced "fix" can often contain more hallucinations. I don't use agents, but I've often experienced the loop of pasting the error back to the LLM, only to get a confident yet non-working response using hallucinated APIs.
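
That loop, roughly (a sketch of the pattern only; ask_llm stands in for whatever model call an agent harness would make):

    import subprocess

    def repair_loop(source_path, ask_llm, max_rounds=3):
        for _ in range(max_rounds):
            result = subprocess.run(["python", source_path],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return True  # it ran -- which is not the same as it being correct
            prompt = "This traceback occurred:\n" + result.stderr + "\nFix the file."
            with open(source_path, "w") as f:
                f.write(ask_llm(prompt))  # the "fix" may introduce new hallucinated APIs
        return False

A clean exit code is the only thing this loop can check, which is exactly the limitation.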

So this problem is not something external tools can solve; it requires a much deeper solution. RAG might be a good initial attempt, but I suspect an architectural solution will be needed to address the root cause. This is important because hallucination is a general problem, and it doesn't just affect code generation.
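
To be clear about what RAG buys you here, the idea is just to ground the model in real documentation before it writes code (search_docs is a placeholder for whatever retrieval you use; a sketch, not any particular product):

    def build_prompt(task: str, search_docs, top_k: int = 3) -> str:
        snippets = search_docs(task, top_k)  # e.g. embedding search over the library's docs
        return (
            "Use only the APIs documented below; say so if something is missing.\n\n"
            + "\n\n".join(snippets)
            + "\n\nTask: " + task
        )

It narrows the space the model can hallucinate in, but it doesn't change the mechanism that produces hallucinations in the first place.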

replies(2): >>44384732 #>>44387733 #
lelele ◴[] No.44384732[source]
> The problem is that many hallucinations do not produce a runtime error [...]

Don't hallucinations mean nonexistent things, that is, in the case of code, nonexistent functions, classes, etc.? How could they fail to lead to a runtime error, then? The fact that LLMs can produce unreliable or inefficient code is a different problem, isn't it?

replies(2): >>44384780 #>>44385134 #
plausibilitious ◴[] No.44384780[source]
This argument is the reason why LLM output failing to match reality was labelled 'hallucination'. It makes it seem like the LLM only makes mistakes in a neatly verifiable manner.

The 'jpeg of the internet' argument was more apt, I think. The output of LLMs might be congruent with reality and with how the prompt contents represent reality. But it might also not be, and in subtle ways too.

If only all code with any flaw in it refused to run. That would be truly amazing. Alas, there are several orders of magnitude more sequences of commands that can be run than sequences that should be run.