When someone uses a tool and surrenders their decision making power to the tool, shouldn't they be the ones solely responsible?
The liability culture only gives lawyers more money and depresses innovation. Responsibility is a thing.
I think it's really, really blurry.
I think the mom's reaction of "ChatGPT killed my son" is ridiculous: no, your son killed himself. ChatGPT facilitated it, based on questions it was asked by your son, but your son did it. And it sounds like he even tried to get a reaction out of you by "showing" you the rope marks on his neck, but you didn't pay attention. I bet you feel guilty about that. I would too, in your position. But foisting your responsibility onto a computer program is not the way to deal with it. (Not placing blame here; everybody misses things, and no one is "on" 100% of the time.)
> Responsibility is a thing.
Does OpenAI (etc.) have a responsibility to reduce the risk of people using their products in ways like this? Legally, maybe not, but I would argue that they absolutely have a moral and ethical responsibility to do so. Hell, this was pretty basic ethics taught in my engineering classes from 25 years ago. Based on the chat excerpts NYT reprinted, it seems like these conversations should have tripped a system prompt that either cut off the conversations entirely, or notified someone that something was very, very wrong.
Why not? I’m not trying to inflame this further, I’m genuinely interested in your logic for this statement.
That whole paragraph is quite something. I wonder what you’d do if you were given the opportunity to repeat those words in front of the parents. I suspect (and hope) some empathy might kick in and you’d realise the pedantry and shilling for the billion dollar company selling a statistical word generator as if it were a god isn’t the response society needs.
Your post read like the real-life version of that dark humour joke:
> Actually, the past tense is “hanged”, as in “he hanged himself”. Sorry about your Dad, though.
Of course you can, and it’s genuinely worrying you so vehemently believe you can’t. That’s what support groups are—strangers in similar circumstances being empathetic to each other to get through a hurtful situation.
“I told you once that I was searching for the nature of evil. I think I’ve come close to defining it: a lack of empathy. It’s the one characteristic that connects all the defendants. A genuine incapacity to feel with their fellow man. Evil, I think, is the absence of empathy.” — Gustave Gilbert, author of “Nuremberg Diary”, an account of interviews conducted during the Nuremberg trials of high-ranking Nazi leaders.
> Empathy like all emotions require effort and cognitive load, and without things being mutual you will eventually slowly become drained, bitter and resentful because of empathy fatigue.
Do you have a source study or is this anecdotal, or speculative? Again, genuinely interested, as it’s a claim I see often, but haven’t been able to pin down.
(While attempting not to virtue-signal) I personally find it easier to empathize with people I don’t know, often, which is why I’m interested. I don’t expect mutual empathy from someone who doesn’t know who I am.
Equally, I try not to consume much news media, as the ‘drain’ I experience feels as though it comes from a place of empathy when I see sad things. So I think I experience a version of what you’re suggesting, and I’m interested in why our language is quite oppositional despite this.
It's like making therapists liable when their patients commit suicide, or when people with eating disorders die indirectly. What ends up happening is that therapists avoid suicidal people like the plague, suicidal people get far less help, and more people commit suicide, not fewer. That is the essence of the harms of safetyism.
You might not think that is real, but I know many therapists via family ties, and handling suicidal patients is an issue that comes up constantly. Many do try to filter them out, because they don't want to be dragged into a lawsuit even one they would win. This is literally reality today.
Doing this with AI will result in kids being banned from AI apps, or forced to have their parents access and read all their AI chats. That will drive them into Discord groups of teens who egg each other on to commit suicide, and now you can't do anything about it: private communication between ordinary individuals has far stronger protections against censorship, and teens are amazing at avoiding supervision. At least with AI models you have a chance to develop something that could actually figure it out for once and solve the moderation balance.
For example, you’re extrapolating from one family making a complaint to a world of “full safetyism / scapegoating”. You also claim it would cause “significantly less safety and far more harm”, which you don’t know. In the same vein you extrapolate to “kids being banned from AI apps” or “forced” (forced!) “to have their parents access and read all AI chats”. Then you go full on into how that will drive them into Discord servers where they’ll “egg each other on to commit suicide”, as if that’s the one thing teenagers on Discord do.
And on, and on. I hope it’s clear why I found it pointless to address your specific points. I’m not being figurative when I say I’d have to reproduce your own comment in full.