Here's a tool you can install that grants your LLM access to <data>. The whole point of the tool is to access <data> and would be worthless without it. We tricked the LLM you gave access to <data> into giving us that data by asking it nicely for it because you installed <other tool> that interleaves untrusted attacker-supplied text into your LLMs text stream and provides a ready-made means of transmitting the data back to somewhere the attacker can access.
This really isn't the fault of the Supabase MCP, the fact that they're bothering to do anything is going above and beyond. We're going to see a lot more people discovering the hard way just how extremely high trust MCP tools are.
Let's say I use the Supabase MCP to do a query, and that query ever happens to return a string from the database that a user could control; maybe, for example, I ask it to look at my schema, figure out my logging, and generate a calendar of the most popular threads from each day... that's also user data! We store lots of user-controlled data in the database, and we often make queries that return user-controlled data. Result: if you ever do a SELECT query that returns such a string, you're pwned, as the LLM is going to look at that response from the tool and consider whether it should react to it. Like, in one sense, this isn't the fault of the Supabase MCP... but I also don't see many safe ways to use a Supabase MCP?
I'm not totally clear here, but it seems the author configured the MCP server to use their personal access token, and the MCP server assumed a privileged role using those credentials?
The MCP server is just the vector here. If we replaced the MCP server with a bare shim that ran SQL queries as a privileged role, the same risk is there.
Is it possible to generate a PAT that is limited in access? If so, that should have been what was done here, and access to sensitive data should have been thus systemically denied.
IMO, an MCP server shouldn't be opinionated about how the data it returns is used. If the data contains commands that tell an AI to nuke the planet, let the query result fly. Could that lead to issues down the line? Maybe, if I built a system that feeds unsanitized user input into an LLM that can take actions with material effects and lacks non-AI safeguards. But why would I do that?
This really isn't the fault of the Supabase MCP, the fact that they're bothering to do anything is going above and beyond. We're going to see a lot more people discovering the hard way just how extremely high trust MCP tools are.