
42 points | Liwink | 1 comment
DaveZale No.45088683
why is this stuff legal?

there should be a "black box" warning prominent on every chatbot message from AI, like "This is AI guidance which can potentially result in grave bodily harm to yourself and others."

tomasphan No.45089481
Should we really demand this of every AI chat application to avert a negative outcome among the tiny minority of users who blindly follow what they're told? Who is going to enforce it? What if I host a private AI model for three users: do I need the warning, and what is the punishment for non-compliance? You see where I'm going with this. The problem with your sentiment is that as soon as you draw a line, it must be defined in excruciating detail, or you risk unintended consequences.