My point is, we can add all sorts of security measures, but at the end of the day nothing is a replacement for user education and intention.
So the analogy is more like a cabin door on a 737. Some yahoo could try to open it in flight, but that doesn't justify it spontaneously blowing out at altitude.
But the elephant in the room is: why are we perseverating on these silly dichotomies? If you've got a problem with an AI, why not just ask the AI? Can't it clean up after making a poopy?!
For the regular user, getting a better output from a capable model is just a matter of changing the prompt. So again, it comes down to education.
Of course model bias plays a role. If you train a model on racist posts, you'll get a racist model. But as long as you have a model that's reasonably capable for the average use case, these edge cases aren't of interest to the user, who can just adjust their prompts.