←back to thread

645 points helloplanets | 1 comments | | HN request time: 0s | source
Show context
alexbecker ◴[] No.45005567[source]
I doubt Comet was using any protections beyond some tuned instructions, but one thing I learned at USENIX Security a couple weeks ago is that nobody has any idea how to deal with prompt injection in a multi-turn/agentic setting.
replies(1): >>45005703 #
hoppp ◴[] No.45005703[source]
Maybe treat prompts like it was SQL strings, they need to be sanitized and preferably never exposed to external dynamic user input
replies(7): >>45005949 #>>45006195 #>>45006203 #>>45006809 #>>45007940 #>>45008268 #>>45011823 #
prisenco ◴[] No.45006203[source]
Sanitizing free-form inputs in a natural language is a logistical nightmare, so it's likely there isn't any safe way to do that.
replies(1): >>45006325 #
hoppp ◴[] No.45006325[source]
Maybe an LLM should do it.

1st run: check and sanitize

2nd run: give to agent with privileges to do stuff

replies(3): >>45006404 #>>45006812 #>>45008085 #
1. OtherShrezzing ◴[] No.45008085{3}[source]
What stops someone prompt injecting the first LLM into passing unsanitised data to the second?