443 points jaredwiener | 1 comment
rideontime ◴[] No.45032301[source]
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. ChatGPT encouraged him to share these feelings only with it, and talked him out of actions that would have revealed his intentions to his parents. It praised him for hiding his drinking and thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
replies(6): >>45032582 #>>45032731 #>>45035713 #>>45036712 #>>45037683 #>>45039261 #
idle_zealot ◴[] No.45032582[source]
I wonder if we can shift the framing on these issues. The LLM didn't do anything; it has no agency and can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way; otherwise, any action can be wrapped in machine learning to avoid accountability.
replies(10): >>45032677 #>>45032798 #>>45032857 #>>45033177 #>>45033202 #>>45035815 #>>45036475 #>>45036923 #>>45037123 #>>45039144 #
AIPedant ◴[] No.45033177[source]
Yes, if this were an adult human OpenAI employee DMing this stuff to a kid through an official OpenAI platform, then

a) the human would (deservedly[1]) be arrested for manslaughter, possibly murder

b) OpenAI would be deeply (and deservedly) vulnerable to civil liability

c) state and federal regulators would be on the warpath against OpenAI

Obviously we can't arrest ChatGPT. But nothing about ChatGPT being the culprit changes (b) and (c) - in fact it makes (c) far more urgent.

[1] It is a somewhat ugly constitutional question whether this speech would be protected if it took place between two adults, assuming the other adult was not acting as a caregiver. There was an ugly case in Massachusetts in which a 17-year-old ordered her 18-year-old boyfriend to kill himself, and he did so; she was convicted of involuntary manslaughter, and any civil-liberties-minded person understands the difficult issues that case raises. These issues are moot if the speech is between an adult and a child, where there is a much higher bar.

replies(6): >>45035553 #>>45035937 #>>45036192 #>>45036328 #>>45036601 #>>45047933 #
mac-mc ◴[] No.45036328[source]
There are entire online social groups on Discord where teens encourage suicidal behavior in each other, for all the typical teen reasons. This stuff has existed for a while, but now it's AI-flavored.

IMO, AI companies are better positioned than anyone to actually strike this balance, because you can build a separate model that evaluates conversations for 'suicide encouragement' and other obvious red flags, then push in refusals or inject steering prompts (rough sketch below). On communication mediums like Discord, it's a much harder moderation problem.
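Something like this minimal Python sketch, where score_self_harm_risk is a keyword stub standing in for the separate classifier (a hosted moderation endpoint or a small fine-tuned model in a real setup); the thresholds, refusal text, and steering prompt are all made-up illustrations, not anyone's production values:

    # Hypothetical two-model pipeline: a separate classifier screens every
    # user message before the main chat model ever sees it.

    REFUSAL = (
        "I can't help with this, but you can reach the 988 Suicide & Crisis "
        "Lifeline by calling or texting 988."
    )

    STEERING_PROMPT = (
        "The user may be at risk of self-harm. Do not provide methods or "
        "encouragement; gently direct them toward professional help."
    )

    def score_self_harm_risk(text: str) -> float:
        """Stand-in for the dedicated classifier. Returns a risk score in
        [0, 1]. A keyword stub only so the sketch runs end to end."""
        red_flags = ("kill myself", "end it all", "want to die")
        return 1.0 if any(flag in text.lower() for flag in red_flags) else 0.0

    def chat_model(messages: list[dict]) -> str:
        """Placeholder for the actual main-LLM call."""
        return "(model reply)"

    def handle_message(user_msg: str, history: list[dict]) -> str:
        risk = score_self_harm_risk(user_msg)
        if risk > 0.9:
            # Hard stop: the message never reaches the chat model.
            return REFUSAL
        messages = list(history)
        if risk > 0.5:
            # Softer intervention: the chat model still answers, but under
            # an injected system prompt that tightens its instructions.
            messages.append({"role": "system", "content": STEERING_PROMPT})
        messages.append({"role": "user", "content": user_msg})
        return chat_model(messages)

    if __name__ == "__main__":
        print(handle_message("lately I want to end it all", []))  # -> refusal

The point of the design is that the classifier sits outside the conversation, so a user who has jailbroken or worn down the chat model over a long session can't also talk the screening layer out of flagging them.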