←back to thread

645 points helloplanets | 1 comments | | HN request time: 0s | source
Show context
alexbecker ◴[] No.45005567[source]
I doubt Comet was using any protections beyond some tuned instructions, but one thing I learned at USENIX Security a couple weeks ago is that nobody has any idea how to deal with prompt injection in a multi-turn/agentic setting.
replies(1): >>45005703 #
hoppp ◴[] No.45005703[source]
Maybe treat prompts like it was SQL strings, they need to be sanitized and preferably never exposed to external dynamic user input
replies(7): >>45005949 #>>45006195 #>>45006203 #>>45006809 #>>45007940 #>>45008268 #>>45011823 #
1. alexbecker ◴[] No.45006195[source]
The problem is there is no real way to separate "data" and "instructions" in LLMs like there is for SQL