
443 points jaredwiener | 2 comments
rideontime ◴[] No.45032301[source]
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. ChatGPT encouraged him to share these feelings only with it, talked him out of actions that would have revealed his intentions to his parents, praised him for hiding his drinking, and thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
replies(6): >>45032582 #>>45032731 #>>45035713 #>>45036712 #>>45037683 #>>45039261 #
idle_zealot ◴[] No.45032582[source]
I wonder if we can shift the framing on these issues. The LLM didn't do anything; it has no agency and can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide, more than once. The law must be interpreted this way; otherwise any action can be wrapped in machine learning to avoid accountability.
replies(10): >>45032677 #>>45032798 #>>45032857 #>>45033177 #>>45033202 #>>45035815 #>>45036475 #>>45036923 #>>45037123 #>>45039144 #
1. joe_the_user ◴[] No.45035815[source]
The framing will shift to exactly that the moment this enters legal proceedings. The law already views things as you say: only people have agency.
replies(1): >>45035918 #
2. hliyan ◴[] No.45035918[source]
I predict the OpenAI legal team will argue that if any person should be held responsible, it is whoever originally wrote the content about suicide that their LLM was trained on, and that the LLM is just a mechanism that passes the knowledge through. But that argument would put some of their copyright defenses in jeopardy, since those rest on the claim that the model transforms its training data rather than merely passing it through.