←back to thread

146 points jakozaur | 2 comments | | HN request time: 0s | source
Show context
splittydev ◴[] No.45670263[source]
All of these are incredibly obvious. If you have even the slightest idea of what you're doing and review the code before deploying it to prod, this will never succeed.

If you have absolutely no idea what you're doing, well, then it doesn't really matter in the end, does it? You're never gonna recognize any security vulnerabilities (as has happened many times with LLM-assisted "no-code" platforms and without any actual malicious intent), and you're going to deploy unsafe code either way.

replies(2): >>45670324 #>>45671085 #
1. BoiledCabbage ◴[] No.45671085[source]
> All of these are incredibly obvious. If you have even the slightest idea of what you're doing and review the code before deploying it to prod, this will never succeed.

Well this is wrong. And it's exactly this type of thinking why people will get absolutely burned by this.

First off the fact they chose obvious exploits for explanatory purposes doesn't mean this attack only supports obvious exploits...

And to your second point of "review the code before you deploy to prod", the second attack did not involve deploying any code to prod. It involved an LLM reading a reddit comment or github comment and immediately executing.

People not taking security seriously and waving it off as trivial is what's gonna make this such a terrible problem.

replies(1): >>45673906 #
2. thayne ◴[] No.45673906[source]
> It involved an LLM reading a reddit comment or github comment and immediately executing.

right, so you shouldn't give the LLM access to execute arbitrary commands without review.