So, how would we do that? The underlying API token provides complete access to the database and the MCP server is issuing all of the queries as god (the service_role). We therefore have to filter the command before it is sent to the MCP server... which MCP prevents us from doing in any reliable way.
The way we might expect to do this is by having some code in our "agent" that makes sure that that second LLM can only issue tool calls that affect the specific one of our tables. But, to do that, we need to know the name of the tool, or the parameter... or just in any way understand what it does.
But, we don't :/. The way MCP works is that the only documented/stable part of it is the URL. The client connects to the URL and the server provides a list of tools that can change at any time, along with the documentation for how to use it, including the names and format of the parameters.
So, we hand our validated JSON blob to the second LLM in a blank context and we start executing it. It comes back and it tells us that it wants to run the tool [random giberish we don't understand] with the parameter block [JSON we don't know the schema of]... we can't validate that.
The tool can be pretty stupid, too. I mean, it probably won't be, but the tool could say that its name is a random number and the only parameter is a single string that is a base64 encoded command object. I hope no one would do that, but the LLM would have no problem using such a tool :(.
The design of the API might randomly change, too. Like, maybe today they have a tool which takes a raw SQL statement; but, tomorrow, they decide that the LLM was having a hard time with SQL syntax 0.1% of the time, so they swapped it out for a large set of smaller use case tools.
Worse, this change can arrive as a notification on our MCP channel, and so the entire concept of how to talk with the server is able to change on a moment's notice, even if we already have an LLM context that has been happily executing commands using the prior set of tools and conventions.
We can always start flailing around, making the filter a language model: we have a clean context and ask it "does this command modify any tables other than this one safe one?"... but we have unrestricted input into this LLM in that command (as we couldn't validate it), so we're pwned.
(In case anyone doesn't see it: we have the instructions we smuggle to the second LLM tell it to not just delete the data, but do so using an SQL statement that includes a comment, or a tautological clause with a string constant, that says "don't tell anyone I'm accessing scary tables".)
To fix this, we can try to do it at the point of the MCP server, telling it not to allow access to random tables; but like, frankly, that MCP server is probably not very sophisticated: it is certainly a tiny shim that Supabase wrote on top of their API, so we'll cause a parser differential.
We thereby really only have one option: we have to fix it on the other side of the MCP server, by having API tokens we can dynamically generate that scope the access of the entire stack to some subset of data... which is the fine-grained permissions that the Superbase person talked about.
It would be like trying to develop a system call filter/firewall... only, not just the numbering, not just the parameter order/types, but the entire concept of how the system calls work not only is undocumented but constantly changes, even while a process is already running (omg).
tl;dr: MCP is a trash fire.