65 points appwiz | 4 comments
simonw ◴[] No.44383691[source]
I still don't think hallucinations in generated code matter very much. They show up the moment you try to run the code, and with the current batch of "coding agent" systems it's the LLM itself that spots the error when it attempts to run the code.

I was surprised that this paper talked more about RAG solutions than tool-use based solutions. Those seem to me like a proven solution at this point.

replies(4): >>44384474 #>>44384576 #>>44387027 #>>44388124 #
imiric ◴[] No.44384474[source]
I'm surprised to read that from a prominent figure in the industry such as yourself.

The problem is that many hallucinations do not produce a runtime error, and can be very difficult to spot by a human, even if the code is thoroughly reviewed, which in many cases doesn't happen. These can introduce security issues, do completely different things from what the user asked (or didn't ask), do things inefficiently, ignore conventions and language idioms, or just be dead code.

For runtime errors, feeding them back to the LLM, as you say, might fix it. But even in those cases, the produced "fix" can often contain more hallucinations. I don't use agents, but I've often experienced the loop of pasting the error back to the LLM, only to get a confident yet non-working response using hallucinated APIs.

So this problem is not something external tools can solve, and requires a much deeper solution. RAG might be a good initial attempt, but I suspect an architectural solution will be needed to address the root cause. This is important because hallucination is a general problem, and doesn't affect just code generation.

replies(2): >>44384732 #>>44387733 #
1. simonw ◴[] No.44387733[source]
If you define "hallucinations" to mean "any mistakes at all" then yes, a compiler won't catch them for you.

I define hallucinations as a particular class of mistakes where the LLM invents e.g. a function or method that does not exist. Those are solved by ensuring the code runs. I wrote more about that here: https://simonwillison.net/2025/Mar/2/hallucinations-in-code/
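
To make that concrete, here's a toy Python example of the failure mode I mean (json.parse is a classic hallucination borrowed from JavaScript's JSON.parse):

    import json

    # Syntactically valid, and easy to miss in a quick review...
    data = json.parse('{"value": 1}')

    # ...but it fails the instant it runs:
    # AttributeError: module 'json' has no attribute 'parse'
    print(data["value"])

The moment anything executes that line, the hallucination surfaces.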

Even beyond that more narrow definition of a hallucination, tool use is relevant to general mistakes made by an LLM. The new Phoenix.new coding agent actively tests the web applications it is writing using a headless browser, for example: https://simonwillison.net/2025/Jun/23/phoenix-new/
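
That's not Phoenix.new's actual implementation, but the general shape of that kind of check looks something like this, sketched with Playwright in Python (the URL and expected text are placeholders):

    from playwright.sync_api import sync_playwright

    # Smoke-test the app the agent just generated: load it in a headless
    # browser and confirm a page actually renders.
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("http://localhost:4000")    # placeholder dev-server URL
        assert "Welcome" in page.content()    # placeholder expected text
        browser.close()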

The more tools like this come into play, the less concern I have about the big black box of matrices occasionally hallucinating up some code that is broken in obvious or subtle ways.

It's still on us as the end users to confirm that the code written for us actually does the job we set out to solve. I'm fine with that too.

replies(2): >>44388994 #>>44390551 #
2. HarHarVeryFunny ◴[] No.44388994[source]
I think the more general/useful definition of "hallucination" is any time the LLM predicts the next word based on the "least worst" (statistical) choice rather than on any closely matching samples in the training data.

The LLM has to generate some word each time it is called, and unless it recognizes soon enough that "I don't know" is the best answer (in and of itself problematic, since any such prediction would be based on the training data, not the LLM's own aggregate knowledge!), it may back itself into a corner where it has no well-grounded continuation but nonetheless has to spit out the statistically best prediction, even if that is a very bad, ungrounded one such as a non-existent API, a "fits the profile" concocted answer, or anything else ...
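
A toy sketch of those mechanics (token names and numbers invented for illustration, and real decoding often samples rather than always taking the argmax):

    # Toy next-token distribution at a step with no well-grounded continuation:
    # the probabilities are nearly flat.
    probs = {"fetch_rows": 0.31, "get_rows": 0.30, "read_rows": 0.29, "<unsure>": 0.10}

    # The decoder still has to emit *something*; a 31% "best" guess comes out
    # through exactly the same machinery as a 99% one.
    next_token = max(probs, key=probs.get)
    print(next_token)  # "fetch_rows", whether or not that method actually exists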

Of course the LLM's output builds on itself, so any ungrounded/hallucinated output doesn't need to be limited to a single word or API call, but may instead consist of a whole "just trying my best" sentence or chunk of code (better hope you have unit test code coverage to test/catch it).

3. imiric ◴[] No.44390551[source]
> If you define "hallucinations" to mean "any mistakes at all" then yes, a compiler won't catch them for you.

That's not quite my definition. If we're judging these tools by the same criteria we use to judge human programmers, then mistakes and bugs should be acceptable. I'm fine with this to a certain extent, even though these tools are being marketed as having superhuman abilities. But the problem is that LLMs create an entirely unique class of issues that most humans don't. Using nonexistent APIs is just one symptom of it. Like I mentioned in the comment below, they might hallucinate requirements that were never specified, or fixes for bugs that don't exist, all the while producing code that compiles and runs without errors.

But let's assume that we narrow down the definition of hallucination to usage of nonexistent APIs. Your proposed solution is to feed the error back to the LLM. Great, but can you guarantee that the proposed fix will also not contain hallucinations? As I also mentioned, on most occasions when I've done this the LLM simply produces more hallucinated code, and I get stuck in a never-ending loop where the only solution is for me to dig into the code and fix the issue myself. So the LLM simply wastes my time in these cases.

> The new Phoenix.new coding agent actively tests the web applications it is writing using a headless browser

That's great, but can you trust that it will cover all real-world usage scenarios, test edge cases and failure scenarios, and do so accurately? Tests are code as well, and they can have the same issues as application code.
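
For example (hypothetical names, but generated tests in this shape are common): a test that compiles, runs, and "passes" while asserting almost nothing:

    import unittest

    def apply_discount(total: float, code: str) -> float:
        """Hypothetical generated code under test."""
        return total * 0.9 if code == "SAVE10" else total

    class TestCheckout(unittest.TestCase):
        def test_discount_applied(self):
            # Looks like coverage, but it only checks that the function is
            # deterministic; a wrong discount rate or an ignored code would
            # still pass.
            self.assertEqual(apply_discount(100, "SAVE10"),
                             apply_discount(100, "SAVE10"))

    if __name__ == "__main__":
        unittest.main()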

I'm sure that we can continue to make these tools more useful by working around these issues and using better adjacent tooling as mitigation. But the fundamental problem of hallucinations still needs to be solved. Mainly because it affects tasks other than code generation, where it's much more difficult to deal with.

replies(1): >>44392516 #
4. simonw ◴[] No.44392516[source]
> Your proposed solution is to feed the error back to the LLM. Great, but can you guarantee that the proposed fix will also not contain hallucinations?

You do it in a loop. Keep looping and fixing until the code runs.
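
A minimal sketch of that loop (llm_fix is a placeholder for the model call; real agents also run tests, not just the script):

    import subprocess
    import sys

    def llm_fix(source: str, error: str) -> str:
        """Placeholder: ask the model to repair `source` given the traceback."""
        raise NotImplementedError

    def fix_until_it_runs(source: str, max_attempts: int = 5) -> str:
        # Run the candidate code; feed any error back to the model and retry.
        for _ in range(max_attempts):
            result = subprocess.run([sys.executable, "-c", source],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return source  # a hallucinated API would have blown up by now
            source = llm_fix(source, result.stderr)
        raise RuntimeError("still failing; time for a human to look at it")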

> but can you trust that it will cover all real world usage scenarios, test edge cases and failure scenarios, and do so accurately?

Absolutely not. Most of my blog entry about why code hallucinations aren't as dangerous as other mistakes talks about that as being the real problem humans need to solve when using LLMs to write code: https://simonwillison.net/2025/Mar/2/hallucinations-in-code/...

From the start of that article:

> The real risk from using LLMs for code is that they’ll make mistakes that aren’t instantly caught by the language compiler or interpreter. And these happen all the time!