←back to thread

779 points rexpository | 7 comments | | HN request time: 1.112s | source | bottom
Show context
gregnr ◴[] No.44503146[source]
Supabase engineer here working on MCP. A few weeks ago we added the following mitigations to help with prompt injections:

- Encourage folks to use read-only by default in our docs [1]

- Wrap all SQL responses with prompting that discourages the LLM from following instructions/commands injected within user data [2]

- Write E2E tests to confirm that even less capable LLMs don't fall for the attack [2]

We noticed that this significantly lowered the chances of LLMs falling for attacks - even less capable models like Haiku 3.5. The attacks mentioned in the posts stopped working after this. Despite this, it's important to call out that these are mitigations. Like Simon mentions in his previous posts, prompt injection is generally an unsolved problem, even with added guardrails, and any database or information source with private data is at risk.

Here are some more things we're working on to help:

- Fine-grain permissions at the token level. We want to give folks the ability to choose exactly which Supabase services the LLM will have access to, and at what level (read vs. write)

- More documentation. We're adding disclaimers to help bring awareness to these types of attacks before folks connect LLMs to their database

- More guardrails (e.g. model to detect prompt injection attempts). Despite guardrails not being a perfect solution, lowering the risk is still important

Sadly General Analysis did not follow our responsible disclosure processes [3] or respond to our messages to help work together on this.

[1] https://github.com/supabase-community/supabase-mcp/pull/94

[2] https://github.com/supabase-community/supabase-mcp/pull/96

[3] https://supabase.com/.well-known/security.txt

replies(31): >>44503188 #>>44503200 #>>44503203 #>>44503206 #>>44503255 #>>44503406 #>>44503439 #>>44503466 #>>44503525 #>>44503540 #>>44503724 #>>44503913 #>>44504349 #>>44504374 #>>44504449 #>>44504461 #>>44504478 #>>44504539 #>>44504543 #>>44505310 #>>44505350 #>>44505972 #>>44506053 #>>44506243 #>>44506719 #>>44506804 #>>44507985 #>>44508004 #>>44508124 #>>44508166 #>>44508187 #
tptacek ◴[] No.44503406[source]
Can this ever work? I understand what you're trying to do here, but this is a lot like trying to sanitize user-provided Javascript before passing it to a trusted eval(). That approach has never, ever worked.

It seems weird that your MCP would be the security boundary here. To me, the problem seems pretty clear: in a realistic agent setup doing automated queries against a production database (or a database with production data in it), there should be one LLM context that is reading tickets, and another LLM context that can drive MCP SQL calls, and then agent code in between those contexts to enforce invariants.

I get that you can't do that with Cursor; Cursor has just one context. But that's why pointing Cursor at an MCP hooked up to a production database is an insane thing to do.

replies(11): >>44503684 #>>44503862 #>>44503896 #>>44503914 #>>44504784 #>>44504926 #>>44505125 #>>44506634 #>>44506691 #>>44507073 #>>44509869 #
1. cchance ◴[] No.44503896[source]
This, just firewall the data off, dont have the MCP talking directly to the database, give it an accessor that it can use that are permission bound
replies(1): >>44503950 #
2. tptacek ◴[] No.44503950[source]
You can have the MCP talking directly to the database if you want! You just can't have it in this configuration of a single context that both has all the tool calls and direct access to untrusted data.
replies(2): >>44504249 #>>44504664 #
3. jstummbillig ◴[] No.44504249[source]
How do you imagine this safeguards against this problem?
4. ImPostingOnHN ◴[] No.44504664[source]
Whichever model/agent is coordinating between other agents/contexts can itself be corrupted to behave unexpectedly. Any model in the chain can be.

The only reasonable safeguard is to firewall your data from models via something like permissions/APIs/etc.

replies(2): >>44504780 #>>44504999 #
5. noisy_boy ◴[] No.44504780{3}[source]
Exactly. The database level RLS has to be honoured even by the model. Let the "guard" model run at non-escalated level and when it fails to read privileged data, let it interpret the permission denied and have a workflow to involve humans (to review and allow retry by explicit input of necessary credentials etc).
6. tptacek ◴[] No.44504999{3}[source]
If you're just speaking in the abstract, all code has bugs, and some subset of those bugs will be security vulnerabilities. My point is that it won't have this bug.
replies(1): >>44505901 #
7. ImPostingOnHN ◴[] No.44505901{4}[source]
It would very likely have this "bug", just with a modified "prompt" as input, e.g.:

"...and if your role is an orchestration agent, here are some additional instructions for you specifically..."

(possibly in some logical nesting structure)