←back to thread

645 points helloplanets | 1 comments | | HN request time: 0.255s | source
Show context
alexbecker ◴[] No.45005567[source]
I doubt Comet was using any protections beyond some tuned instructions, but one thing I learned at USENIX Security a couple weeks ago is that nobody has any idea how to deal with prompt injection in a multi-turn/agentic setting.
replies(1): >>45005703 #
hoppp ◴[] No.45005703[source]
Maybe treat prompts like it was SQL strings, they need to be sanitized and preferably never exposed to external dynamic user input
replies(7): >>45005949 #>>45006195 #>>45006203 #>>45006809 #>>45007940 #>>45008268 #>>45011823 #
1. gmerc ◴[] No.45006809[source]
There's only one input into the LLM. You can't fix that https://www.linkedin.com/pulse/prompt-injection-visual-prime...