
146 points by jakozaur | 1 comment
simonw | No.45670650
If you can get malicious instructions into the context of even the most powerful reasoning LLMs in the world, you'll still be able to trick them into outputting vulnerable code like this if you try hard enough.

I don't think the fact that small models are easier to trick is particularly interesting from a security perspective, because you need to assume that ANY model can be prompt-injected by a suitably motivated attacker.

On that basis I agree with the article that we need to be using additional layers of protection that work against compromised models, such as robust sandboxed execution of generated code and maybe techniques like static analysis too (I'm less sold on those; I expect plenty of malicious vulnerabilities could sneak past them).
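As a rough illustration of that "sandboxed execution" layer, here is a minimal sketch that runs LLM-generated code in a throwaway Docker container with no network, a read-only filesystem, and capped resources. The generated_code string, the python:3.12-slim image, and the specific limits are illustrative assumptions, not anything from simonw's setup or talk.

```python
import os
import subprocess
import tempfile

# Stand-in for code produced by a (possibly compromised) coding agent.
generated_code = "print(sum(range(10)))"

with tempfile.TemporaryDirectory() as workdir:
    script_path = os.path.join(workdir, "task.py")
    with open(script_path, "w") as f:
        f.write(generated_code)

    # Run the generated code in a disposable container:
    # no network, read-only filesystem, capped memory/CPU, hard timeout.
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",
            "--memory", "256m",
            "--cpus", "0.5",
            "--read-only",
            "-v", f"{workdir}:/work:ro",
            "python:3.12-slim",
            "python", "/work/task.py",
        ],
        capture_output=True,
        text=True,
        timeout=30,
    )

print(result.stdout)
```

The point of the flags is that even if the generated code is malicious, it can't phone home, persist changes, or exhaust the host; the blast radius is the container, which is discarded when the run ends.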

Coincidentally, I gave a talk about sandboxing coding agents last night: https://simonwillison.net/2025/Oct/22/living-dangerously-wit...

1. mritchie712 | No.45671294
We started giving our agent (https://www.definite.app/) a sandbox (we use e2b.dev), and it's solved so many problems. It's created new problems too, but net-net it's been a huge improvement.

Something like "where do we store temporary files the agent creates?" becomes obvious if you have a sandbox you can spin up and down in a couple seconds.
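For a concrete feel of that pattern, here is a minimal sketch assuming the e2b Python SDK's documented Sandbox / run_code / files interface; the scratch path and the snippet the agent "writes" are made up, and this is not necessarily how Definite's agent is actually wired up.

```python
from e2b_code_interpreter import Sandbox  # pip install e2b-code-interpreter

# Assumes an E2B_API_KEY is set in the environment.
sbx = Sandbox()  # ephemeral cloud sandbox, up in a few seconds
try:
    # Agent-generated code writes its scratch files inside the sandbox,
    # not on the host, so "where do temp files go?" answers itself.
    sbx.run_code(
        "with open('/tmp/scratch.csv', 'w') as f:\n"
        "    f.write('id,value\\n1,42\\n')\n"
    )
    print(sbx.files.read("/tmp/scratch.csv"))
finally:
    sbx.kill()  # tear down the sandbox; its temporary files vanish with it
```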