Before we even get into the technical underpinnings and issues, there's a logical problem that should have stopped seasoned technologists dead in their tracks, and that is:
> What are the probable issues we will encounter once we release this model into the wild, and what is the worst that could plausibly happen?
The answer to that thought-experiment should have foretold this very problem, and that would have been the end of this feature.
This is not a nuanced problem, and it does not take more than intro-level knowledge of security flaws to see. Allowing an actor (I am sighing as I say this, but "whether human or not") to input whatever they would like is a recipe for disaster, and has been since the advent of interconnected computers.
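To make the point concrete: this is the same intro-level flaw we've been patching since SQL injection was new. A minimal, hypothetical sketch (the function names and schema are invented for illustration; nothing here is MCP-specific, which is exactly the point):

```python
import sqlite3

def lookup_user(conn: sqlite3.Connection, user_text: str) -> list:
    # UNSAFE: the caller's text is interpolated straight into the query,
    # so input like  ' OR '1'='1  rewrites the WHERE clause and dumps
    # every row. Classic injection, circa forever.
    query = f"SELECT * FROM users WHERE name = '{user_text}'"
    return conn.execute(query).fetchall()

def lookup_user_safe(conn: sqlite3.Connection, user_text: str) -> list:
    # The decades-old fix: parameterize, so input is treated as data,
    # never as code.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_text,)
    ).fetchall()
```

The catch, and the reason this one is "not easy to solve," is that natural-language prompts have no equivalent of the parameterized query: there is no reliable way to mark a model's input as data rather than instructions.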
The reason this very real and not-easy-to-solve vulnerability made it this far (and, as far as I can tell, permeates every MCP implementation) is that there is a butt-load (technical term) of money from VCs and other investors available to any founder who slaps "AI" on something. And since the easy, surface-level stuff has already been done, why not revolutionize software development by making it as easy as typing a few words into a prompt?
Programmers are expensive! Typing is not! Let's make programmers nothing more than typists!
And in the pursuit of funding, or of a get-rich-quick payday, we're not only moving fast with reckless abandon, we've also abandoned all good sense.
Of course, for some of us, this is going to turn out to be a nice payday. For others, the ones who have to deal with the data breaches and real-world effects of unleashing AI on everything, it's going to suck, and it's going to keep sucking. Rational thought and money do not mix, and this is another example of that problem at work.