←back to thread

786 points rexpository | 10 comments | | HN request time: 1.239s | source | bottom
Show context
gregnr ◴[] No.44503146[source]
Supabase engineer here working on MCP. A few weeks ago we added the following mitigations to help with prompt injections:

- Encourage folks to use read-only by default in our docs [1]

- Wrap all SQL responses with prompting that discourages the LLM from following instructions/commands injected within user data [2]

- Write E2E tests to confirm that even less capable LLMs don't fall for the attack [2]

We noticed that this significantly lowered the chances of LLMs falling for attacks - even less capable models like Haiku 3.5. The attacks mentioned in the posts stopped working after this. Despite this, it's important to call out that these are mitigations. Like Simon mentions in his previous posts, prompt injection is generally an unsolved problem, even with added guardrails, and any database or information source with private data is at risk.

Here are some more things we're working on to help:

- Fine-grain permissions at the token level. We want to give folks the ability to choose exactly which Supabase services the LLM will have access to, and at what level (read vs. write)

- More documentation. We're adding disclaimers to help bring awareness to these types of attacks before folks connect LLMs to their database

- More guardrails (e.g. model to detect prompt injection attempts). Despite guardrails not being a perfect solution, lowering the risk is still important

Sadly General Analysis did not follow our responsible disclosure processes [3] or respond to our messages to help work together on this.

[1] https://github.com/supabase-community/supabase-mcp/pull/94

[2] https://github.com/supabase-community/supabase-mcp/pull/96

[3] https://supabase.com/.well-known/security.txt

replies(32): >>44503188 #>>44503200 #>>44503203 #>>44503206 #>>44503255 #>>44503406 #>>44503439 #>>44503466 #>>44503525 #>>44503540 #>>44503724 #>>44503913 #>>44504349 #>>44504374 #>>44504449 #>>44504461 #>>44504478 #>>44504539 #>>44504543 #>>44505310 #>>44505350 #>>44505972 #>>44506053 #>>44506243 #>>44506719 #>>44506804 #>>44507985 #>>44508004 #>>44508124 #>>44508166 #>>44508187 #>>44512202 #
1. OtherShrezzing ◴[] No.44503200[source]
Pragmatically, does your responsible disclosure processes matter, when the resolution is “ask the LLM more times to not leak data, and add disclosures to the documentation”?
replies(2): >>44503315 #>>44506068 #
2. ajross ◴[] No.44503315[source]
Absolutely astounding to me, having watched security culture evolve from "this will never happen", though "don't do that", to the modern world of multi-mode threat analysis and defense in depth...

...to see it all thrown in the trash as we're now exhorted, literally, to merely ask our software nicely not to have bugs.

replies(4): >>44503437 #>>44504302 #>>44504403 #>>44504456 #
3. Aperocky ◴[] No.44503437[source]
How to spell job security in a roundabout way.
4. cyanydeez ◴[] No.44504302[source]
Late stage grift economy is a weird parallelism with LLM State of art bullshit.
5. jimjimjim ◴[] No.44504403[source]
Yes, the vast amount of effort, time and money spent on making the world secure things and checking that those things are secured now being dismissed because people can't understand that maybe LLMs shouldn't be used for absolutely everything.
replies(2): >>44505421 #>>44507779 #
6. ◴[] No.44504456[source]
7. verdverm ◴[] No.44505421{3}[source]
Someone posted Google's new MCP for databases in Slack, and after looking at it, I pulled a quote about how you should use these things to modify the schema on a live database.

It seems like not only do they want us to regress on security, but also IaC and *Ops

I don't use these things beyond writing code. They are mediocre at that, soost def not going to hook them up to live systems. I'm perfectly happy to still press tab and enter as needed, after reading what these things actually want to do.

replies(1): >>44510312 #
8. MobiusHorizons ◴[] No.44506068[source]
The only sensible response in my view would be to provide tools for restricting what data the LLM has access to based on the authorization present in the request. I understand this is probably complicated to do at the abstraction layer supabase is acting at, but offering this service without such tools is (in my view) flagrantly irresponsible, unless the tool is targeted at trusted user use-cases Even then, some tools need to exist.
9. pjc50 ◴[] No.44507779{3}[source]
Security loses against the massive, massive amount of money and marketing that has been spent on forcing 'AI' into absolutely everything.

A conspiracy theory might be that making all the world's data get run through US-controlled GPUs in US data centers might have ulterior motives.

10. ben_w ◴[] No.44510312{4}[source]
> I pulled a quote about how you should use these things to modify the schema on a live database.

Agh.

I'm old enough to remember when one of the common AI arguments was "Easy: we'll just keep it in a box and not connect it to the outside world" and then disbelieving Yudkowsky when he role-played as an AI and convinced people to let him out of the box.

Even though I'm in the group that's more impressed than unimpressed by the progress AI is making, I still wouldn't let AI modify live anything even if it was really in the top 5% of software developers and not just top 5% of existing easy to test metrics — though of course, the top 5% of software developers would know better than to modify live databases.