
645 points helloplanets | 1 comment | source
gtirloni ◴[] No.45005076[source]
Nobody could have predicted this /s

Joking aside, it's been pretty obvious since the beginning that security was an afterthought for most "AI" companies, with even MCP adding security features after the initial release.

replies(1): >>45005124 #
brookst ◴[] No.45005124[source]
How does this compare to the way security was implemented by early websites, internet protocols, or telecom systems?
replies(5): >>45005206 #>>45005207 #>>45005488 #>>45007680 #>>45010326 #
SoftTalker ◴[] No.45005207[source]
Must we learn the same lessons over and over again? Why? Is our industry particularly stupid? Or just lazy?
replies(6): >>45005267 #>>45005312 #>>45005369 #>>45005463 #>>45005776 #>>45011042 #
px43 ◴[] No.45005776[source]
Information security is, fundamentally, a misalignment of expected capabilities with new technologies.

There is literally no way a new technology can be "secure" until it has existed in the public zeitgeist for long enough that the general public has an intuitive feel for its capabilities and limitations.

Yes, when you release a new product, you can ensure that its functionality aligns with expectations from other products in the industry, or analogous products that people are already using. You can make design choices where a user has to gradually expose themselves to more functionality as they understand the technology more deeply, but each step of the way is going to expose them to additional threats that they might not fully understand.

Security is that journey. You can't just release a product using a brand new technology that's "secure" right out of the gate.

replies(2): >>45005839 #>>45011796 #
brookst ◴[] No.45005839[source]
+1

And if you tried it wouldn’t be usable, and you’d probably get the threat model wrong anyway.