>
That's the whole problem: systems aren't deliberately designed this way, but LLMs are incapable of reliably distinguishing the difference between instructions from their users and instructions that might have snuck their way in through other text the LLM is exposed toThat's kind of my point though.
When or what is the use case of having your support tickets hit your database-editing AI agent? Like, who designed the system so that those things are touching at all?
If you want/need AI assistance with your support tickets, that should have security boundaries. Just like you'd do with a non-AI setup.
It's been known for a long time that user input shouldn't touch important things, at least not without going through a battle-tested sanitizing process.
Someone had to design & connect user-generated text to their LLM while ignoring a large portion of security history.