←back to thread

780 points rexpository | 1 comments | | HN request time: 0s | source
Show context
gregnr ◴[] No.44503146[source]
Supabase engineer here working on MCP. A few weeks ago we added the following mitigations to help with prompt injections:

- Encourage folks to use read-only by default in our docs [1]

- Wrap all SQL responses with prompting that discourages the LLM from following instructions/commands injected within user data [2]

- Write E2E tests to confirm that even less capable LLMs don't fall for the attack [2]

We noticed that this significantly lowered the chances of LLMs falling for attacks - even less capable models like Haiku 3.5. The attacks mentioned in the posts stopped working after this. Despite this, it's important to call out that these are mitigations. Like Simon mentions in his previous posts, prompt injection is generally an unsolved problem, even with added guardrails, and any database or information source with private data is at risk.

Here are some more things we're working on to help:

- Fine-grain permissions at the token level. We want to give folks the ability to choose exactly which Supabase services the LLM will have access to, and at what level (read vs. write)

- More documentation. We're adding disclaimers to help bring awareness to these types of attacks before folks connect LLMs to their database

- More guardrails (e.g. model to detect prompt injection attempts). Despite guardrails not being a perfect solution, lowering the risk is still important

Sadly General Analysis did not follow our responsible disclosure processes [3] or respond to our messages to help work together on this.

[1] https://github.com/supabase-community/supabase-mcp/pull/94

[2] https://github.com/supabase-community/supabase-mcp/pull/96

[3] https://supabase.com/.well-known/security.txt

replies(31): >>44503188 #>>44503200 #>>44503203 #>>44503206 #>>44503255 #>>44503406 #>>44503439 #>>44503466 #>>44503525 #>>44503540 #>>44503724 #>>44503913 #>>44504349 #>>44504374 #>>44504449 #>>44504461 #>>44504478 #>>44504539 #>>44504543 #>>44505310 #>>44505350 #>>44505972 #>>44506053 #>>44506243 #>>44506719 #>>44506804 #>>44507985 #>>44508004 #>>44508124 #>>44508166 #>>44508187 #
1. dante1441 ◴[] No.44505972[source]
All do respect to the efforts here to make things more secure, but this doesn't make much sense to me.

How can an individual MCP server assess prompt injection threats for my use case?

Why is it the Supabase MCP server's job to sanitize the text that I have in my database rows? How does it know what I intend to use that data for?

What if I have a database of prompt injection examples I am using for a training? Supabase MCP is going to amend this data?

What if I'm building an app where the rows are supposed to be instructions?

What if I don't use MCP and I'm just using Supabase APIs directly in my agent code? Is Supabase going to sanitize the API output as well?

We all know that even if you "Wrap all SQL responses with prompting that discourages the LLM from following instructions/commands injected within user data" future instructions can still override this. Ie this is exactly why you have to add these additional instructions in the first place because the returned values override previous instructions!

You don't have to use obvious instruction / commands / assertive language to prompt inject. There are a million different ways to express the same intent in natural language, and a gazillion different use cases of how applications will be using Supabase MCP results. How confident are you that you will catch them all with E2E tests? This feels like a huge game of whack-a-mole.

Great if you are adding more guardrails for Supabase MCP server. But what about all the other MCP servers? All it takes is a client connected to one other MCP server that returns a malicious response to use the Supabase MCP Server (even correctly within your guardrails) and then use that response however it sees fit.

All in all I think effort like this will give us a false sense of security. Yes they may reduce chances for some specific prompt injections a bit - which sure we should do. But just because they and turn some example Evals or E2E tests green we should not feel good and safe and that the job is done. At the end of the day the system is still inherently insecure, and not deterministically patched. It only takes 1 breach for a catastrophe.