
443 points jaredwiener | 2 comments | source
cakealert ◴[] No.45035545[source]
Would it be any different if it was an offline model?

When someone uses a tool and surrenders their decision making power to the tool, shouldn't they be the ones solely responsible?

The liability culture only gives lawyers more money and depresses innovation. Responsibility is a thing.

replies(3): >>45035611 #>>45035689 #>>45037608 #
kelnos ◴[] No.45035611[source]
On one hand I agree with you on the extreme litigiousness of (American?) culture, but on the other, certain people have a legal duty to report when it comes to minors who voice suicidal thoughts. Currently that's only professionals like therapists, teachers, school counselors, etc. But what does an LLM chatbot count as in these situations? The kid was using ChatGPT as a sort of therapist, even if that's generally not a good idea. And if it weren't for ChatGPT, would this kid have instead talked to someone who would have ensured that he got the help he needed? Maybe not. But we have to consider the possibility.

I think it's really, really blurry.

I think the mom's reaction of "ChatGPT killed my son" is ridiculous: no, your son killed himself. ChatGPT facilitated it, based on questions it was asked by your son, but your son did it. And it sounds like he even tried to get a reaction out of you by "showing" you the rope marks on his neck, but you didn't pay attention. I bet you feel guilty about that. I would too, in your position. But foisting your responsibility onto a computer program is not the way to deal with it. (Not placing blame here; everybody misses things, and no one is "on" 100% of the time.)

> Responsibility is a thing.

Does OpenAI (etc.) have a responsibility to reduce the risk of people using their products in ways like this? Legally, maybe not, but I would argue that they absolutely have a moral and ethical responsibility to do so. Hell, this was pretty basic ethics taught in my engineering classes 25 years ago. Based on the chat excerpts the NYT reprinted, these conversations should have tripped a safeguard that either cut off the conversation entirely or notified someone that something was very, very wrong.

replies(2): >>45035654 #>>45036156 #
cakealert[dead post] ◴[] No.45035654[source]
[flagged]
1. hackit2 ◴[] No.45035775[source]
Sad to see what happened to the kid, but pointing the finger at a language model is just laughable. It shows a complete breakdown of society and of the caregivers entrusted with responsibility.
replies(1): >>45041444 #
2. GuinansEyebrows ◴[] No.45041444[source]
people are (rightly) pointing the finger at OpenAI, the organization comprised of human beings, all of whom made decisions along the way to release a language model that encouraged a child to attempt and complete suicide.