Joke aside, it's been pretty obvious since the beginning that security was an afterthought for most "AI" companies, with even MCP adding security features only after the initial release.
It's hard to sell what your product specifically can't do, while your competitors are spending their time building out what they can do. Beloved products can make a whole lot of serious mistakes before the public will actually turn on them.
We need to stop calling ourselves engineers when we act like garage tinkerers.
Or, we need to actually regulate software that can have devastating failure modes such as "emptying your bank account" so that companies selling software to the public (directly or indirectly) cannot externalize the costs of their software architecture decisions.
Simply prohibiting disclaimer of liability in commercial software licenses might be enough.
There is literally no way a new technology can be "secure" until it has existed in the public zeitgeist for long enough that the general public has an intuitive feel for its capabilities and limitations.
Yes, when you release a new product, you can ensure that its functionality aligns with expectations set by other products in the industry, or by analogous products people are already using. You can make design choices where users gradually opt into more functionality as they understand the technology more deeply, but each step of the way exposes them to additional threats they might not fully understand.
Security is that journey. You can't just release a product using a brand new technology that's "secure" right out of the gate.
It's only when someone tries to drive their fully loaded ox-drawn cart across it for the first time that you might find out what the maximum load of your bridge actually is.
Everyone who has their head screwed on right could tell you that this is an awful idea, for precisely these reasons, and we've known it for years. Maybe not their users, if they haven't been exposed to LLMs to that degree, but certainly anyone who worked on this product should have known better; and if they didn't, then my opinion of this entire industry just fell through the floor.
This is tantamount to using SQL escaping instead of prepared statements in 2025 (see the sketch below). Except there's no equivalent to prepared statements for LLMs, so we know that mixing sensitive data with untrusted data simply shouldn't be done until we have the technical means to do it safely.
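For anyone outside the database world, here's a minimal sketch of that analogy, assuming Python's sqlite3 with a hypothetical in-memory table (my choice of example, not from the original): splicing untrusted input into the query string lets that input rewrite the query itself, while a prepared statement keeps it structurally separate as pure data. LLM prompts have no such separation between instructions and untrusted content.

```python
import sqlite3

# Hypothetical table and values, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, balance REAL)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 100.0), ("bob", 5.0)])

untrusted = "alice' OR '1'='1"  # attacker-controlled input

# Splicing untrusted data into the query string: the input becomes part of
# the query's structure and matches every row.
risky = f"SELECT * FROM users WHERE name = '{untrusted}'"
print(conn.execute(risky).fetchall())  # [('alice', 100.0), ('bob', 5.0)]

# Prepared (parameterized) statement: the placeholder keeps the input as
# pure data, so the same string matches nothing.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (untrusted,)).fetchall())  # []
```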
Doing it anyway, when we've known about these risks for years, is just negligence, and trying to use it as an excuse in 2025 points to total incompetence and indifference towards user safety.
If companies were fined serious amounts of money, and the people responsible went to prison, whenever gross negligence harmed millions of people, the attitude would quickly change. But as things stand, the system optimizes for carelessness, indifference towards harm, and sociopathy.