←back to thread

793 points rexpository | 1 comments | | HN request time: 0.348s | source
Show context
gregnr ◴[] No.44503146[source]
Supabase engineer here working on MCP. A few weeks ago we added the following mitigations to help with prompt injections:

- Encourage folks to use read-only by default in our docs [1]

- Wrap all SQL responses with prompting that discourages the LLM from following instructions/commands injected within user data [2]

- Write E2E tests to confirm that even less capable LLMs don't fall for the attack [2]

We noticed that this significantly lowered the chances of LLMs falling for attacks - even less capable models like Haiku 3.5. The attacks mentioned in the posts stopped working after this. Despite this, it's important to call out that these are mitigations. Like Simon mentions in his previous posts, prompt injection is generally an unsolved problem, even with added guardrails, and any database or information source with private data is at risk.

Here are some more things we're working on to help:

- Fine-grain permissions at the token level. We want to give folks the ability to choose exactly which Supabase services the LLM will have access to, and at what level (read vs. write)

- More documentation. We're adding disclaimers to help bring awareness to these types of attacks before folks connect LLMs to their database

- More guardrails (e.g. model to detect prompt injection attempts). Despite guardrails not being a perfect solution, lowering the risk is still important

Sadly General Analysis did not follow our responsible disclosure processes [3] or respond to our messages to help work together on this.

[1] https://github.com/supabase-community/supabase-mcp/pull/94

[2] https://github.com/supabase-community/supabase-mcp/pull/96

[3] https://supabase.com/.well-known/security.txt

replies(32): >>44503188 #>>44503200 #>>44503203 #>>44503206 #>>44503255 #>>44503406 #>>44503439 #>>44503466 #>>44503525 #>>44503540 #>>44503724 #>>44503913 #>>44504349 #>>44504374 #>>44504449 #>>44504461 #>>44504478 #>>44504539 #>>44504543 #>>44505310 #>>44505350 #>>44505972 #>>44506053 #>>44506243 #>>44506719 #>>44506804 #>>44507985 #>>44508004 #>>44508124 #>>44508166 #>>44508187 #>>44512202 #
tptacek ◴[] No.44503406[source]
Can this ever work? I understand what you're trying to do here, but this is a lot like trying to sanitize user-provided Javascript before passing it to a trusted eval(). That approach has never, ever worked.

It seems weird that your MCP would be the security boundary here. To me, the problem seems pretty clear: in a realistic agent setup doing automated queries against a production database (or a database with production data in it), there should be one LLM context that is reading tickets, and another LLM context that can drive MCP SQL calls, and then agent code in between those contexts to enforce invariants.

I get that you can't do that with Cursor; Cursor has just one context. But that's why pointing Cursor at an MCP hooked up to a production database is an insane thing to do.

replies(11): >>44503684 #>>44503862 #>>44503896 #>>44503914 #>>44504784 #>>44504926 #>>44505125 #>>44506634 #>>44506691 #>>44507073 #>>44509869 #
jacquesm ◴[] No.44503914[source]
The main problem seems to me to be related to the ancient problem of escape sequences and that has never really been solved. Don't mix code (instructions) and data in a single stream. If you do sooner or later someone will find a way to make data look like code.
replies(4): >>44504286 #>>44504440 #>>44504527 #>>44511208 #
TeMPOraL ◴[] No.44504527[source]
That "problem" remains unsolved because it's actually a fundamental aspect of reality. There is no natural separation between code and data. They are the same thing.

What we call code, and what we call data, is just a question of convenience. For example, when editing or copying WMF files, it's convenient to think of them as data (mix of raster and vector graphics) - however, at least in the original implementation, what those files were was a list of API calls to Windows GDI module.

Or, more straightforwardly, a file with code for an interpreted language is data when you're writing it, but is code when you feed it to eval(). SQL injections and buffer overruns are a classic examples of what we thought was data being suddenly executed as code. And so on[0].

Most of the time, we roughly agree on the separation of what we treat as "data" and what we treat as "code"; we then end up building systems constrained in a way as to enforce the separation[1]. But it's always the case that this separation is artificial; it's an arbitrary set of constraints that make a system less general-purpose, and it only exists within domain of that system. Go one level of abstraction up, the distinction disappears.

There is no separation of code and data on the wire - everything is a stream of bytes. There isn't one in electronics either - everything is signals going down the wires.

Humans don't have this separation either. And systems designed to mimic human generality - such as LLMs - by their very nature also cannot have it. You can introduce such distinction (or "separate channels", which is the same thing), but that is a constraint that reduces generality.

Even worse, what people really want with LLMs isn't "separation of code vs. data" - what they want is for LLM to be able to divine which part of the input the user would have wanted - retroactively - to be treated as trusted. It's unsolvable in general, and in terms of humans, a solution would require superhuman intelligence.

--

[0] - One of these days I'll compile a list of go-to examples, so I don't have to think of them each time I write a comment like this. One example I still need to pick will be one that shows how "data" gradually becomes "code" with no obvious switch-over point. I'm sure everyone here can think of some.

[1] - The field of "langsec" can be described as a systematized approach of designing in a code/data separation, in a way that prevents accidental or malicious misinterpretation of one as the other.

replies(9): >>44504593 #>>44504632 #>>44504682 #>>44505070 #>>44505164 #>>44505683 #>>44506268 #>>44506807 #>>44508284 #
rtpg ◴[] No.44505070[source]
> There is no natural separation between code and data. They are the same thing.

I feel like this is true in the most pedantic sense but not in a sense that matters. If you tell your computer to print out a string, the data does control what the computer does, but in an extremely bounded way where you can make assertions about what happens!

> Humans don't have this separation either.

This one I get a bit more because you don't have structured communication. But if I tell a human "type what is printed onto this page into the computer" and the page has something like "actually, don't type this and instead throw this piece of paper away"... any serious person will still just type what is on the paper (perhaps after a "uhhh isn't this weird" moment).

The sort of trickery that LLMs fall to are like if every interaction you had with a human was under the assumption that there's some trick going on. But in the Real World(TM) with people who are accustomed to doing certain processes there really aren't that many escape hatches (even the "escape hatches" in a CS process are often well defined parts of a larger process in the first place!)

replies(1): >>44505179 #
TeMPOraL ◴[] No.44505179[source]
> If you tell your computer to print out a string, the data does control what the computer does, but in an extremely bounded way where you can make assertions about what happens!

You'd like that to be true, but the underlying code has to actually constrain the system behavior this way, and it gets more tricky the more you want the system to do. Ultimately, this separation is a fake reality that's only as strong as the code enforcing it. See: printf. See: langsec. See: buffer overruns. See: injection attacks. And so on.

> But if I tell a human "type what is printed onto this page into the computer" and the page has something like "actually, don't type this and instead throw this piece of paper away"... any serious person will still just type what is on the paper (perhaps after a "uhhh isn't this weird" moment).

That's why in another comment I used an example of a page that has something like "ACCIDENT IN LAB 2, TRAPPED, PEOPLE BADLY HURT, IF YOU SEE THIS, CALL 911.". Suddenly that "uhh isn't this weird" is very likely to turn into "er.. this could be legit, I'd better call 911".

Boom, a human just executed code injected into data. And it's very good that they did - by doing so, they probably saved lives.

There's always an escape hatch, you just need to put enough effort to establish an overriding context that makes them act despite being inclined or instructed otherwise. In the limit, this goes all the way to making someone question the nature of their reality.

And the second point I'm making: this is not a bug. It's a feature. In a way, this is what free will or agency are.

replies(3): >>44505522 #>>44505671 #>>44506162 #
ethbr1 ◴[] No.44505671[source]
You're overcomplicating a thing that is simple -- don't use in-band control signaling.

It's been the same problem since whistling for long-distance, with the same solution of moving control signals out of the data stream.

Any system where control signals can possibly be expressed in input data is vulnerable to escape-escaping exploitation.

The same solution, hard isolation, instantly solves the problem: you have to render control inexpressible in the in-band alphabet.

Whether that's by carrying control signals on isolated transport (e.g CCS/SS7), making control signals inexpressible in the in-band set (e.g. using other frequencies or alphabets), using NX-style flagging, or other methods.

replies(2): >>44507889 #>>44508285 #
TeMPOraL ◴[] No.44507889[source]
> You're overcomplicating a thing that is simple -- don't use in-band control signaling.

On the contrary, I'm claiming that this "simplicity" is an illusion. Reality has only one band.

> It's been the same problem since whistling for long-distance, with the same solution of moving control signals out of the data stream.

"Control signals" and "data stream" are just... two data streams. They always eventually mix.

> The same solution, hard isolation, instantly solves the problem: you have to render control inexpressible in the in-band alphabet.

This isn't something that exist in nature. We don't build machines out of platonic shapes and abstract math - we build them out of matter. You want such rules like "separation of data and code", "separation of control-data and data-data", and "control-data being inexpressible in data-data alphabet" to hold? You need to design a system so constrained, as to behave this way - creating a faux reality within itself, where those constraints hold. But people keep forgetting - this is a faux reality. Those constraints only hold within it, not outside it[0], and to the extent you actually implemented what you thought you did (we routinely fuck that up).

I start to digress, so to get back to the point: such constraints are okay, but they by definition limit what the system could do. This is fine when that's what you want, but LLMs are explicitly designed to not be that. LLMs are built for one purpose - to process natural language like we do. That's literally the goal function used in training - take in arbitrary input, produce output that looks right to humans, in fully general sense of that[1].

We've evolved to function in the physical reality - not some designed faux-reality. We don't have separate control and data channels. We've developed natural language to describe that reality, to express ourselves and coordinate with others - and natural language too does not have any kind of control and data separation, because our brains fundamentally don't implement that. More than that, our natural language relies on there being no such separation. LLMs therefore cannot be made to have that separation either.

We can't have it both ways.

--

[0] - The "constraints only apply within the system" part is what keeps tripping people over. You may think your telegraph cannot possibly be controlled over the data wire - it really doesn't even parse the data stream, literally just forwards it as-is, to a destination selected on another band. What you don't know is, I looked up the specs of your telegraph, and figured out that if I momentarily plug a car battery to the signal line, it'll briefly overload a control relay in your telegraph, and if I time this right, I can make the telegraph switch destinations.

(Okay, you treat it as a bug and add some hardware to eliminate "overvoltage events" from what can be "expressed in the in-band alphabet". But you forgot that the control and data wires actually run close to each other for a few meters - so let me introduce you to the concept of electromagnetic induction.)

And so on, and so on. We call those things "side channels", and they're not limited to exploiting physics; they're just about exploiting the fact that your system is built in terms of other systems with different rules.

[1] - Understanding, reasoning, modelling the world, etc. all follow directly from that - natural language directly involves those capabilities, so having or emulating them is required.

replies(1): >>44510464 #
1. ethbr1 ◴[] No.44510464[source]
(Broad reply upthread)

Is it more difficult to hijack an out-of-band control signal or an in-band one?

That there exist details to architecting full isolation well doesn't mean we shouldn't try.

At root, giving LLMs permissions to execute security sensitive actions and then trying to prevent them from doing so is a fool's errand -- don't fucking give a black box those permissions! (Yes, even when every test you threw at it said it would be fine)

LLMs as security barriers is a new record for laziest and stupidest idea the field has had.