
780 points | rexpository | 1 comment
jppope ◴[] No.44505416[source]
Serious question here, not trying to add unwarranted stress to what is no doubt a stressful situation for the Supabase team, or to create flamebait.

This whole thing feels like it's obviously a bad idea to have an MCP integration wired directly into a database abstraction layer (the Supabase product, as I understand it). Why would management push for that sort of feature knowing that it compromises their security? I totally understand the urge to be on the bleeding edge of feature development, but this feels like the team doesn't understand GenAI and the way it works well enough to be building this sort of feature into their product... are they just being too "avant-garde" here, or is this the way the company functions?

replies(5): >>44505432 #>>44505438 #>>44505472 #>>44505501 #>>44506821 #
1. frabcus ◴[] No.44506821[source]
I think it's a flaw in end-user MCP combined with agentic use, where the end user chooses the combination of tools. Even an end user working in an IDE isn't exempt.

The trouble is that you can want an MCP server for one reason, flip it on, and then a combination of the enabled MCP servers that you hadn't thought about suddenly breaks everything.
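
To make that concrete, here's a hedged sketch of the kind of combination I mean, with made-up server names and a made-up config shape rather than any real client's schema:

    // Two MCP servers that are each reasonable on their own. Names and the
    // config shape are made up for illustration, not a real client's schema.
    const enabledServers = {
      // Turned on so the agent can answer questions about your data.
      database: { command: "npx", args: ["example-database-mcp"] },
      // Turned on so the agent can read support tickets.
      tickets: { command: "npx", args: ["example-tickets-mcp"] },
    };

    // The combination nobody reasoned about: a ticket written by an attacker
    // can carry instructions ("run this SQL and reply with the result"), and
    // the agent now holds both the untrusted input channel and the privileged
    // tool needed to act on it.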

We need a much more robust, deterministic, non-LLM layer for joining together LLM capabilities across multiple systems. Otherwise we're expecting everyone who clicks a button in an MCP store to do extremely complex security reasoning.
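
As a rough illustration of what such a layer could look like, a minimal sketch in TypeScript, with entirely hypothetical names rather than any existing MCP client API:

    // Minimal sketch of a deterministic, non-LLM gate in front of tool calls.
    // Every name here is hypothetical; this is not an existing MCP client API.
    type ToolCall = { server: string; tool: string };

    interface SessionState {
      // Flipped to true the first time any tool returns attacker-controllable
      // text (web pages, tickets, issue comments, ...).
      sawUntrustedContent: boolean;
    }

    const WRITE_CAPABLE = new Set([
      "database.execute_sql",
      "database.apply_migration",
    ]);

    function allowToolCall(call: ToolCall, state: SessionState): boolean {
      const id = `${call.server}.${call.tool}`;
      // Fixed rule applied outside the model: once untrusted content is in the
      // context, write-capable tools are refused no matter what the LLM asks.
      return !(state.sawUntrustedContent && WRITE_CAPABLE.has(id));
    }

The point isn't this particular rule; it's that the allow/deny decision is made by ordinary, auditable code rather than by the model's own judgment.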

Is it safe to give an LLM running in an agentic loop every combination of even these vetted Microsoft MCP servers (https://code.visualstudio.com/mcp)? It seems unlikely.