I think this particular exploit crosses multiple trust boundaries: between the LLM, the MCP server, and Supabase. You need protection at each point in that chain, not just at the LLM prompt itself. The LLM can be protected with prompt injection guardrails; the MCP server should be scoped with authn/authz credentials matching the user or session behind the current LLM context; and those permissions should in turn be reflected in the Supabase account that issues the keys. Together these measures would significantly reduce the attack surface, and there are plenty of examples of them being put in place in production systems.
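For the MCP-server/Supabase end of that chain, here's a minimal sketch of what per-user scoping can look like with supabase-js v2 (the table name and the token plumbing are hypothetical, just for illustration): instead of handing the server a service_role key, run queries under the end user's own JWT so Postgres Row Level Security decides what the model can see.

```typescript
// Sketch: query Supabase as the end user, not as an admin, so RLS
// bounds what any LLM-driven tool call can reach. Assumes supabase-js v2;
// "documents" and the userAccessToken parameter are hypothetical.
import { createClient } from "@supabase/supabase-js";

export async function queryAsUser(userAccessToken: string) {
  const supabase = createClient(
    process.env.SUPABASE_URL!,      // project URL
    process.env.SUPABASE_ANON_KEY!, // anon key, so RLS stays in effect
    {
      // Attach the user's JWT: every query runs as that user.
      global: { headers: { Authorization: `Bearer ${userAccessToken}` } },
    },
  );

  // Even if a prompt injection convinces the model to ask for everything,
  // the database only returns rows this user's RLS policies allow.
  return supabase.from("documents").select("*");
}
```

The point of this design is that a successful injection's blast radius collapses to whatever that one user could already do anyway.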
The Supabase documentation lists development-environment examples for connecting MCP servers to AI coding assistants. I would never connect that same MCP server to a production environment without the security measures above, but it's likely fine for a development environment with dummy data. It's not clear to me that Supabase was implying any production use case with their MCP support, so I'm not sure I agree with the severity of this security concern.
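And for that development setup, the official Supabase MCP server has flags for exactly this kind of scoping. Below is a sketch of a Cursor-style mcp.json, written as a TypeScript object so I can annotate it; the flag names reflect my reading of the @supabase/mcp-server-supabase docs, so verify them against the current release before relying on this.

```typescript
// Sketch of an mcp.json entry (as an annotated TS object) that keeps the
// server read-only and pinned to a dev project. Flag names assumed from
// the Supabase MCP server docs; the token placeholder is hypothetical.
const mcpConfig = {
  mcpServers: {
    supabase: {
      command: "npx",
      args: [
        "-y",
        "@supabase/mcp-server-supabase@latest",
        "--read-only",                 // block all write operations
        "--project-ref=<dev-project>", // pin to a dev project, never prod
      ],
      env: {
        // A scoped personal access token for the dev project only,
        // never a production service_role key.
        SUPABASE_ACCESS_TOKEN: "<dev-only-token>",
      },
    },
  },
};
```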