
443 points jaredwiener | source
cakealert ◴[] No.45035545[source]
Would it be any different if it was an offline model?

When someone uses a tool and surrenders their decision making power to the tool, shouldn't they be the ones solely responsible?

The liability culture only gives lawyers more money and depresses innovation. Responsibility is a thing.

replies(3): >>45035611 #>>45035689 #>>45037608 #
kelnos ◴[] No.45035611[source]
On one hand I agree with you on the extreme litigiousness of (American?) culture, but on the other, certain people have a legal duty to report when it comes to minors who voice suicidal thoughts. Currently that's only professionals like therapists, teachers, school counselors, etc. But what does an LLM chatbot count as in these situations? The kid was using ChatGPT as a sort of therapist, even if that's generally not a good idea. And if it weren't for ChatGPT, would this kid have instead talked to someone who would have ensured that he got the help he needed? Maybe not. But we have to consider the possibility.

I think it's really, really blurry.

I think the mom's reaction of "ChatGPT killed my son" is ridiculous: no, your son killed himself. ChatGPT facilitated it, based on questions it was asked by your son, but your son did it. And it sounds like he even tried to get a reaction out of you by "showing" you the rope marks on his neck, but you didn't pay attention. I bet you feel guilty about that. I would too, in your position. But foisting your responsibility onto a computer program is not the way to deal with it. (Not placing blame here; everybody misses things, and no one is "on" 100% of the time.)

> Responsibility is a thing.

Does OpenAI (etc.) have a responsibility to reduce the risk of people using their products in ways like this? Legally, maybe not, but I would argue that they absolutely have a moral and ethical responsibility to do so. Hell, this was pretty basic ethics taught in my engineering classes from 25 years ago. Based on the chat excerpts NYT reprinted, it seems like these conversations should have tripped a system prompt that either cut off the conversations entirely, or notified someone that something was very, very wrong.

replies(2): >>45035654 #>>45036156 #
latexr ◴[] No.45036156[source]
> I think the mom's reaction of "ChatGPT killed my son" is ridiculous: no, your son killed himself. ChatGPT facilitated it (…)

That whole paragraph is quite something. I wonder what you’d do if you were given the opportunity to repeat those words in front of the parents. I suspect (and hope) some empathy might kick in and you’d realise that the pedantry, and the shilling for a billion-dollar company selling a statistical word generator as if it were a god, isn’t the response society needs.

Your post read like the real-life version of that dark humour joke:

> Actually, the past tense is “hanged”, as in “he hanged himself”. Sorry about your Dad, though.

replies(1): >>45036489 #
novok ◴[] No.45036489{3}[source]
You can have empathy for the people suffering a tragedy without going into full safetyism and scapegoating, which, driven by the emotional weight of the moment, ends up producing significantly less safety and far more harm.

It's like making therapists liable when their patients commit suicide, directly or indirectly (as with eating disorders). What ends up happening is that therapists avoid suicidal patients like the plague, suicidal people get far less help, and more people commit suicide, not fewer. That is the essence of the harm of safetyism.

You might not think that is real, but I know many therapists via family ties, and handling suicidal patients is an issue that comes up constantly. Many do try to filter them out, because they don't want to be dragged into even a lawsuit that they would win. This is literally reality today.

Doing this with AI will result in kids being banned from AI apps, or forced to let their parents access and read all their AI chats. That will drive them into Discord groups of teens who egg each other on to commit suicide, and then you can't do anything about it, because private communication between ordinary individuals has far stronger protections against censorship, and teens are amazing at avoiding supervision. At least with AI models you have a chance to develop something that could actually get the moderation balance right for once.

replies(1): >>45036633 #
latexr ◴[] No.45036633{4}[source]
That is one big slippery slope fallacy. You are inventing motives, outcomes, and future unproven capabilities out of thin air. It’s a made up narrative which does not reflect the state of the world and requires one to buy into a narrow, specific world view.

https://en.wikipedia.org/wiki/Slippery_slope

replies(1): >>45041644 #
novok ◴[] No.45041644{5}[source]
Instead of just saying “thats not true”, could you actually point out how it is not?
replies(1): >>45045544 #
latexr ◴[] No.45045544{6}[source]
I initially tried, but your whole comment is one big slippery-slope salad, so I had to stop or I’d be commenting on every line, and that felt absurd.

For example, you’re extrapolating one family making a complaint to a world of “full safetyism / scapegoating”. You also claim it would cause “significantly less safety and far more harm”, which you don’t know. In that same vein you extrapolate into “kids being banned from AI apps” or “forced” (forced!) “to have their parents access and read all AI chats”. Then you go full on into how that will drive them into Discord servers where they’ll “egg each other on to commit suicide” as if that’s the one thing teenagers on Discord do.

And on, and on. I hope it’s clear why I found it pointless to address your specific points. I’m not being figurative when I say I’d have to reproduce your own comment in full.