There should be a "black box" warning displayed prominently on every chat message from an AI, like: "This is AI guidance which can potentially result in grave bodily harm to yourself and others."
Should we really demand this of every AI chat application just to avert a bad outcome for the tiny minority of users who blindly follow whatever they're told?
Who is going to enforce this? What if I host a private AI model for 3 users? Do I still need the warning, and what is the punishment for non-compliance?
You see where I'm going with this. The problem with your sentiment is that as soon as you draw a line, it must be defined in excruciating detail, or you risk unintended consequences.