
443 points | jaredwiener
broker354690 No.45033596
Why isn't OpenAI criminally liable for this?

Last I checked:

-Signals emitted by a machine at the behest of a legal person, intended to be read or heard by another legal person, are legally classified as 'speech'.

-ChatGPT is just a program like Microsoft Word and not a legal person. OpenAI is a legal person, though.

-The servers running ChatGPT are owned by OpenAI.

-OpenAI willingly did business with this teenager, letting him set up an account in exchange for money. This business is a service under the control of OpenAI, not a product like a knife or gun. OpenAI intended to transmit speech to this teenager.

-A person can be liable (civilly? criminally?) for inciting another person's suicide. It is not protected speech to persuade someone into suicide.

-OpenAI produced some illegal speech and sent it to a suicidal teenager, who then committed suicide.

If Sam Altman stabbed the kid to death, it wouldn't matter if he did it by accident. Sam Altman would still be at fault. You wouldn't sue or arrest the knife he used to do the deed.

Any lawyers here who can correct me, seeing as I am not one? It seems clear as day to me that OpenAI/Sam Altman directly encouraged a child to kill themselves.

replies(6): >>45033677 >>45035753 >>45036119 >>45036667 >>45036842 >>45038959
mathiaspoint No.45038959
What's your argument here? That hosted LLM services shouldn't exist because they might read people's bad ideas back to them?

ChatGPT now has enough guardrails that it often refuses productive prompts. It's actually very, very hard to get it to do what this person did, and arguably impossible to do unintentionally.

replies(1): >>45047043
broker354690 No.45047043
ChatGPT is a service, so OpenAI should be exposed to even more liability than if they had sold the LLM to the user to run offline. If the user had been running a local LLM, OpenAI would not have been responsible for generating the speech.

As it stands, the human beings called OpenAI willingly did business with this child, and willingly generated the speech that persuaded him to kill himself and sent it to him. That they used a computer to do so is irrelevant.

replies(1): >>45052901
mathiaspoint No.45052901
There isn't anything they could have practically done to prevent this except not allowing kids to use it.

They may have chosen not to age-restrict it because:

1) it's really not practical to do that effectively, and

2) more importantly (and they seem to care about this more than most companies), it would push kids toward less safe models like those used on character.ai.

What OpenAI does now is what trying to make AI safe looks like. Most of the people arguing for "accountability" are de facto arguing for a wild west situation.