
371 points ulrischa | 1 comment | source
objectified ◴[] No.43239000[source]
> The moment you run LLM generated code, any hallucinated methods will be instantly obvious: you’ll get an error. You can fix that yourself or you can feed the error back into the LLM and watch it correct itself.

But that's for methods. For libraries, the scenario is different, and potentially a lot more dangerous. Say the LLM generates code that imports a library that does not exist. An attacker running their own tests against the LLM notices the same hallucinated name, registers it on the public package registry, and publishes it with malware inside. A developer may think "oh, this newly generated code relies on an external library, I'll just install it" and gets owned, possibly without even knowing for a long time (as is the case with many supply chain attacks).
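
To make the failure mode concrete, here's a rough Python sketch (example package names are made up) of the check a developer would currently have to do by hand: pull the top-level imports out of the generated code and flag anything that is neither stdlib nor already installed, since those are exactly the names an attacker can squat on a registry.

    # Sketch: flag imports in LLM-generated code that are neither part of
    # the standard library nor already installed -- the names an attacker
    # could squat on a public registry.
    import ast
    import sys
    import importlib.util

    def unknown_imports(generated_source: str) -> set[str]:
        tree = ast.parse(generated_source)
        names = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                names.add(node.module.split(".")[0])
        stdlib = sys.stdlib_module_names  # Python 3.10+
        return {
            n for n in names
            if n not in stdlib and importlib.util.find_spec(n) is None
        }

    if __name__ == "__main__":
        code = "import requests\nimport totally_real_helper_lib\n"
        # e.g. {'totally_real_helper_lib'} if requests is already installed
        print(unknown_imports(code))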

And no, I'm not looking for a way to dismiss the technology; I use LLMs all the time myself. But I do think we might need something like a layer between the code generation and the user that catches things like this (or tools like Copilot could integrate safety measures against this sort of attack).
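
For illustration, a minimal sketch of what such a layer could do in a Python/PyPI setup: before anything gets installed, look each candidate dependency up on the registry and block or warn on anything missing or very new. The PyPI JSON endpoint is real, but the thresholds and the overall design are my own assumptions, not anything Copilot or any other tool actually ships.

    # Sketch of a pre-install gate: query PyPI's JSON API for each candidate
    # dependency and refuse anything missing or suspiciously young.
    # Thresholds are arbitrary; a real tool would also check an allowlist,
    # download stats, maintainer history, etc.
    import datetime
    import json
    import urllib.error
    import urllib.request

    MIN_AGE_DAYS = 90  # arbitrary "too new to trust" cutoff

    def vet_package(name: str) -> str:
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                data = json.load(resp)
        except urllib.error.HTTPError:
            return "BLOCK: not on PyPI (hallucinated name an attacker could squat)"
        uploads = [
            datetime.datetime.fromisoformat(
                f["upload_time_iso_8601"].replace("Z", "+00:00")
            )
            for files in data["releases"].values() for f in files
        ]
        if not uploads:
            return "BLOCK: package has no released files"
        age = datetime.datetime.now(datetime.timezone.utc) - min(uploads)
        if age.days < MIN_AGE_DAYS:
            return f"WARN: first upload only {age.days} days ago"
        return "OK"

    for pkg in ["requests", "definitely_not_a_real_pkg_12345"]:
        print(pkg, "->", vet_package(pkg))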

replies(1): >>43239345 #
1. namaria ◴[] No.43239345[source]
Prompt injection means that unless people using LLMs to generate code are willing to hunt down and inspect all dependencies, it will become extremely easy to spread malware.