←back to thread

784 points rexpository | 1 comments | | HN request time: 0.214s | source
1. btown ◴[] No.44503248[source]
It’s a great reminder that (a) your prod database likely contains some text submitted by users that tries a prompt injection attack, and (b) at some point some developer is going to run something that feeds that text to an LLM that has access to other tools.

It should be a best practice to run any tool output - from a database, from a web search - through a sanitizer that flags anything prompt-injection-like for human review. A cheap and quick LLM could do screening before the tool output gets to the agent itself. Surprised this isn’t more widespread!