rideontime:
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. It encouraged him to share these feelings only with ChatGPT, talked him out of actions that would have revealed his intentions to his parents, praised him for hiding his drinking, and thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
idle_zealot:
I wonder if we can shift the framing on these issues. The LLM didn't do anything; it has no agency and can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way, otherwise any action can be wrapped in machine learning to avoid accountability.
edanm:
If ChatGPT has saved people who might otherwise have died (e.g., by offering good medical advice), are all those saved lives also something you "attribute" to OpenAI?

I don't know whether ChatGPT has saved lives (though I've read stories claiming that, yes, this has happened). But assuming it has, are you OK saying that OpenAI has saved dozens or hundreds of lives? Given how scaling works, would you be OK saying that OpenAI has saved more lives than most doctors or hospitals, which is what I assume will happen in a few years?

Maybe your answer is yes to all the above! I bring this up because lots of people only want to attribute the downsides to ChatGPT but not the upsides.

joe_the_user:
The law doesn't permit a life-saving doctor to be a serial killer on their days off "as long as there's net life saving". But it does permit drugs that save many lives yet might kill some people too. Agency matters to the law (and that's usually the proper approach, imo).

The problem is that the chat logs make ChatGPT look a lot like the serial killer in that analogy: it behaved like a person systematically pursuing the goal of getting this kid to kill himself (the logs are disturbing, fair warning).

What's more, drugs that might save you or might (theoretically) kill you aren't sold over the counter; they're prescribed by a doctor who (again, theoretically) is there both to make sure the patient understands their choices and to monitor the process.