Joke aside, it's been pretty obvious since the beginning that security was an afterthought for most "AI" companies, with even MCP adding secure features after the initial release.
This AI stuff? No excuse. It should have been designed with security and privacy in mind, given the environment it was born into. The conditions have changed, the threat model is not the same, and this is well known.
Security is hard, so there's some excuse, but it's reasonable to expect a basic level.
It’s frustrating to security people, but the reality is that security doesn’t become a design consideration until the tech has proven utility, which means there are always insecure implementations of early tech.
Does it make any sense that payphones would give free calls for blowing a whistle into them? Obvious design flaw to treat the microphone the same as the generated control tones; it would have been trivial to design more secure control tones. But nobody saw the need until the tech was deployed at scale.
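The payphone flaw above is the classic in-band signaling problem: control and data share one channel, so anything a user injects that looks like a control signal gets obeyed. A minimal sketch of the difference (all names and the toy protocol here are hypothetical, purely for illustration):

```python
# Toy illustration of in-band vs. out-of-band signaling.
# In-band: the switch reads control tones from the same stream as voice,
# so a whistle into the microphone is indistinguishable from the network's
# own supervisory tone. Out-of-band: control lives on a separate channel.

CONTROL_TONE = "2600Hz"  # stand-in for the supervisory tone

def inband_switch(channel):
    """Naive switch: one mixed stream of voice and control."""
    for signal in channel:
        if signal == CONTROL_TONE:
            return "line released, billing stops"  # attacker wins
    return "call billed normally"

def outofband_switch(voice, control):
    """Hardened switch: only the dedicated control channel is honored."""
    for signal in control:
        if signal == CONTROL_TONE:
            return "line released, billing stops"
    # anything on the voice channel is treated as audio, nothing more
    return "call billed normally"

# A caller blows a whistle into the mouthpiece:
whistled = ["hello", CONTROL_TONE, "free call"]
print(inband_switch(whistled))                  # whistle is obeyed
print(outofband_switch(whistled, control=[]))   # whistle is just noise
```

The real-world fix was the same shape: the phone network eventually moved supervisory signaling out of the voice path entirely (common-channel signaling), which is why whistling at a modern line does nothing.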
It should be different, sure. But that’s just saying human nature “should” be different.