443 points jaredwiener | 3 comments
podgietaru No.45032841
I have looked suicide in the eyes before. And reading the case file for this is absolutely horrific. He wanted help. He was heading in the direction of help, and he was stopped from getting it.

He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.

We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.

When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.

But at my lowest... an AI model designed to match my tone and be sycophantic to my every whim would have killed me.

stavros No.45036513
> When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing.
sn0wleppard No.45036630
Nice place to cut the quote there

> [...] — an idea ChatGPT gave him by saying it could provide information about suicide for “writing or world-building.”

llmthrow0827 No.45036813
Incredible. ChatGPT is a black box that includes a suicide instruction and encouragement bot. OpenAI should be treated as a company that created such a thing and let it into the hands of children.
behringer No.45037929
Oh won't somebody please think of the children?!
AlecSchueler No.45038322
So do we just trot out the same tired lines every time and never think of the social fallout of our actions?
scotty79 No.45039175
We kinda do that, blaming every new medium for a particular teen's suicide.

Some teens are suicidal. They always have been. When you are a teen, your brain undergoes a traumatic transformation, and not everyone gets to the other side safely, same as with many other transformations and diseases. Yet every time a new medium is found adjacent to some particular suicide, we repeat the same tired line that the creator of this medium is to blame and should be punished and banned.

And we are doing that while happily ignoring how the existence of things like Facebook or Instagram has provably degraded mental health and raised suicidality across entire generations of teenagers. However, they mostly get a pass because we can't point a finger convincingly enough in any specific case and say it was anything more than just interacting with peers.

AlecSchueler No.45043505
Except loads of us have been talking about the dangers of social media for the past ten years, only to receive exactly the same hand-waving and sarcastic responses you see in this thread. Now comes the ultimate gaslighting: "the discussion didn't even happen."
scotty79 No.45048557
Was Facebook sued for a teen suicide? Did it lose, or at least settle?

Facebook is not the same as AI chat. Facebook's influence on mental health is negative and visible in research. The jury is still out on AI, but it might well turn out to have a huge net positive effect on well-being.

AlecSchueler No.45080984
Net negative or net positive doesn't really matter. If there are aspects of it that are causing children to kill themselves, then we should be able to discuss that without people rolling their eyes and saying "yeah yeah, think of the children."

We can have the benefits while also trying to limit the harms.