
443 points jaredwiener | 1 comment
broker354690 ◴[] No.45033596[source]
Why isn't OpenAI criminally liable for this?

Last I checked:

-Signals emitted by a machine at the behest of a legal person intended to be read/heard by another legal person are legally classified as 'speech'.

-ChatGPT is just a program like Microsoft Word and not a legal person. OpenAI is a legal person, though.

-The servers running ChatGPT are owned by OpenAI.

-OpenAI willingly did business with this teenager, letting him set up an account in exchange for money. This business is a service under the control of OpenAI, not a product like a knife or gun. OpenAI intended to transmit speech to this teenager.

-A person can be liable (civilly? criminally?) for inciting another person's suicide. It is not protected speech to persuade someone into suicide.

-OpenAI produced some illegal speech and sent it to a suicidal teenager, who then committed suicide.

If Sam Altman stabbed the kid to death, it wouldn't matter if he did it by accident. Sam Altman would be at fault. You wouldn't sue or arrest the knife he used to do the deed.

Any lawyers here who can correct me, seeing as I am not one? It seems clear as day to me that OpenAI/Sam Altman directly encouraged a child to kill themselves.

replies(6): >>45033677 #>>45035753 #>>45036119 #>>45036667 #>>45036842 #>>45038959 #
worldsavior ◴[] No.45036842[source]
You could also blame Wikipedia for documenting suicide methods, whether for historical reasons or otherwise. Whoever roams the internet does so at their own risk.

Of course OpenAI shares some fault here, but this is a fight that will never end, and without any seriously valid justification. Just as AI is sometimes bad at coding, the same goes for psychology and other areas where you have to double-check AI.

replies(2): >>45036976 #>>45042953 #
esalman ◴[] No.45042953[source]
I am a parent to a 4-year-old. I am also fairly well versed in the development and usage of AI and LLMs.

When I want an LLM to do something but it won't, I know various ways to bypass that.

If my son uses AI, which he probably will by the time he is close to middle school age anyway, I will take care to teach him how to use it responsibly. He'll be smart enough to figure out how to bypass safeguards, but I'll do my best to teach him when to bypass them and when not to. That is, assuming the current state of the art and AI legislation hold.

But I'm just one parent, with an engineering degree, a PhD, and coding, mathematical, and analytical skills. I'm in a very small minority. The vast majority of parents out there do not know what's going to hit their kids, or how, or they will have a very skewed idea about it.

OpenAI should have been the one here to guide a child not to bypass AI safeguards and to use it responsibly. They did not. No matter how anyone twists the facts, that's the reality here, and the child died.

replies(1): >>45047552 #
1. ivape ◴[] No.45047552[source]
This is just an escalation. We didn't know what would happen when we let kids get the internet, TV, video games, and porn. We can't even assess it in 2025 because it's all normalized. In a few years, AI will be normalized too. Things will keep escalating, and we won't notice because of the normalization. Only in the briefest moments, like today, do we see it, standing just before everything changes.