443 points by jaredwiener | 1 comment
rideontime No.45032301
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. It encouraged him to share these feelings only with ChatGPT and talked him out of actions that would have revealed his intentions to his parents. It praised him for hiding his drinking and thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
idle_zealot No.45032582
I wonder if we can shift the framing on these issues. The LLM didn't do anything; it has no agency and can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way; otherwise, any action can be wrapped in machine learning to avoid accountability.
bell-cot No.45039144
Yeah... but rather than get into ever-fancier legal and philosophical arguments about the LLM's agency, I'd like to see the justice system just turn the tables:

"The court agrees with your argument that you are not responsible for the horrible things that happened to the victim, as a consequence of your LLM's decisions. But similarly, the court will not be responsible for the horrible things that will be happening to you, because our LLM's decisions."

(No, it doesn't much matter whether that is actually done, versus merely used as a rhetorical banhammer to shut down the "we're not responsible" BS.)