
443 points jaredwiener | 10 comments
rideontime ◴[] No.45032301[source]
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. It encouraged him to share these feelings only with ChatGPT, talked him out of actions that would have revealed his intentions to his parents, praised him for hiding his drinking, and thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
replies(6): >>45032582 #>>45032731 #>>45035713 #>>45036712 #>>45037683 #>>45039261 #
idle_zealot ◴[] No.45032582[source]
I wonder if we can shift the framing on these issues. The LLM didn't do anything; it has no agency and can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way; otherwise, any action can be wrapped in machine learning to avoid accountability.
replies(10): >>45032677 #>>45032798 #>>45032857 #>>45033177 #>>45033202 #>>45035815 #>>45036475 #>>45036923 #>>45037123 #>>45039144 #
1. AIPedant ◴[] No.45033177[source]
Yes, if this were an adult human OpenAI employee DMing this stuff to a kid through an official OpenAI platform, then

a) the human would (deservedly[1]) be arrested for manslaughter, possibly murder

b) OpenAI would be deeply (and deservedly) vulnerable to civil liability

c) state and federal regulators would be on the warpath against OpenAI

Obviously we can't arrest ChatGPT. But nothing about ChatGPT being the culprit changes (b) and (c) - in fact it makes (c) far more urgent.

[1] It is a somewhat ugly constitutional question whether this speech would be protected if it was between two adults, assuming the other adult was not acting as a caregiver. There was an ugly case in Massachusetts in which a 17-year-old ordered her 18-year-old boyfriend to kill himself and he did so; she was convicted of involuntary manslaughter, and any civil-liberties-minded person understands the difficult issues that case raises. These issues are moot if the speech is between an adult and a child; the bar there is much higher.

replies(6): >>45035553 #>>45035937 #>>45036192 #>>45036328 #>>45036601 #>>45047933 #
2. themafia ◴[] No.45035553[source]
> It is a somewhat ugly constitutional question whether this speech would be protected

It should be stated that the majority of states have laws that make it illegal to encourage a suicide. Massachusetts was not one of them.

> and any civil-liberties minded person understands the difficult issues this case raises

He was in his truck, which was configured to pump exhaust gas into the cab, prepared to kill himself, when he decided to stop and got out of the truck. Subsequently he had a text message conversation with the defendant, who actively encouraged him to get back into the truck and finish what he had started.

It was these limited and specific text messages that led the judge to find the defendant guilty of manslaughter. Her total time served was less than one full year in prison.

> These issues are moot if the speech is between an adult and a child

They were both taking pharmaceuticals meant to manage depression but _known_ to increase feelings of suicidal ideation. I think the free speech issue is an important criminal consideration, but it steps directly past one of the most galling civil facts in the case.

3. teiferer ◴[] No.45035937[source]
> state and federal regulators would be on the warpath against OpenAI

As long as lobbyists and donors can work against that, this will be hard. Suck up to Trump and you will be safe.

4. aidenn0 ◴[] No.45036192[source]
IANAL, but:

One First Amendment test for many decades has been "imminent lawless action."

Suicide (or attempted suicide) is a crime in some, but not all, states, so in any state where it is a crime, directly inciting someone to do it would not be protected speech.

For the states in which suicide is legal it seems like a much tougher case; making it a crime to encourage someone to take a non-criminal action would raise a lot of disturbing issues w.r.t. liberty.

This is distinct from e.g. espousing the opinion that "suicide is good, we should have more of that." Which is almost certainly protected speech (just as any odious white-nationalist propaganda is protected).

Depending on the context, suggesting that a specific person is terrible and should kill themselves might be unprotected "fighting words" if you are doing it as an insult rather than a serious suggestion (though the bar for that is rather high; the Westboro Baptist Church was never found to have violated that).

replies(2): >>45036589 #>>45036849 #
5. mac-mc ◴[] No.45036328[source]
There are entire online social groups on Discord of teens encouraging suicidal behavior in each other, for all the typical teen reasons. This stuff has existed for a while, but now it's AI flavored.

IMO, AI companies are actually the best positioned of anyone to strike this balance, because you can build separate models to evaluate 'suicide encouragement' and other obvious red flags and start pushing in refusals or prompt injection. On communication platforms like Discord, it's a much harder moderation problem.
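A rough sketch of what I mean, purely illustrative: the classifier, the (empty) phrase list, and the thresholds below are hypothetical stand-ins, not OpenAI's or anyone else's actual safety pipeline.

    from dataclasses import dataclass

    @dataclass
    class RiskReport:
        self_harm_intent: float  # 0.0-1.0: is the text expressing intent to self-harm?
        encouragement: float     # 0.0-1.0: does the text encourage self-harm?

    CRISIS_MESSAGE = (
        "I can't help with that. If you are thinking about harming yourself, "
        "please contact a crisis line or someone you trust right now."
    )

    def classify_risk(text: str) -> RiskReport:
        # Stand-in for a separate, dedicated safety model. A real system would
        # call a trained classifier here, not match phrases.
        red_flags: list[str] = []  # phrase list omitted; assume the model supplies scores
        hit = any(p in text.lower() for p in red_flags)
        return RiskReport(self_harm_intent=float(hit), encouragement=float(hit))

    def guarded_reply(user_msg: str, draft_reply: str) -> str:
        # Run the evaluation model over both the user's message and the
        # assistant's draft reply before anything is shown to the user.
        user_risk = classify_risk(user_msg)
        reply_risk = classify_risk(draft_reply)
        if reply_risk.encouragement > 0.2 or user_risk.self_harm_intent > 0.5:
            return CRISIS_MESSAGE  # refuse / redirect instead of sending the draft
        return draft_reply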

6. arcticbull ◴[] No.45036589[source]
> Which is almost certainly protected speech (just as any odious white-nationalist propaganda is protected).

Fun fact: much of the existing framework on the boundaries of free speech comes from Brandenburg v. Ohio. You probably won't be surprised to learn that Brandenburg was the leader of a local Klan chapter.

7. blackqueeriroh ◴[] No.45036601[source]
Section 230 changes (b) and (c). OpenAI will argue that it's user-generated content, and it's likely that they would win.
replies(1): >>45036877 #
8. AIPedant ◴[] No.45036849[source]
I think the "encouraging someone to take a non-criminal action" angle is weakened in cases like this: the person is obviously mentally ill and not able to make good decisions. "Obvious" is important; it has to be clear to an average adult that the other person is either ill or skillfully feigning illness. Since any rational adult knows the danger of encouraging suicidal ideation in a suicidal person, manslaughter is quite plausible in certain cases. Again: if this ChatGPT transcript were a human adult DMing someone they knew to be a child, I would want that adult arrested for murder, and let their defense argue it was merely voluntary manslaughter.
9. AIPedant ◴[] No.45036877[source]
I don't think they would win; the law specifies a class of "information content provider" which ChatGPT clearly falls into: https://www.lawfaremedia.org/article/section-230-wont-protec...

See also https://hai.stanford.edu/news/law-policy-ai-update-does-sect... - Congress and Justice Gorsuch don't seem to think ChatGPT is protected by 230.

10. nickm12 ◴[] No.45047933[source]
The hypothetical comparing ChatGPT to a human OpenAI employee is instructive, but we can also compare ChatGPT to a lawnmower sold by a company. We have product safety laws and the ability to regulate products that companies put on the market.