443 points | jaredwiener | 2 comments
podgietaru | No.45032841
I have looked suicide in the eyes before. And reading the case file for this is absolutely horrific. He wanted help. He was heading in the direction of help, and he was stopped from getting it.

He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.

We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.

When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.

But at my lowest... an AI model designed to match my tone and be sycophantic to my every whim would have killed me.

stavros | No.45036513
> When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing.
sn0wleppard | No.45036630
Nice place to cut the quote there

> [...] — an idea ChatGPT gave him by saying it could provide information about suicide for “writing or world-building.”

muzani | No.45036677
Yup, one of the huge flaws I saw in GPT-5 is that it will constantly say things like "I have to stop you here. I can't do what you're requesting. However, I can roleplay or help you with research on that. Would you like to do that?"
kouteiheika | No.45036805
It's not a flaw. It's a tradeoff. There are valid uses for models which are uncensored and will do whatever you ask of them, and there are valid uses for models which are censored and will refuse anything remotely controversial.
agumonkey | No.45037998
Reminds me of trading apps. In the end, all risky situations will be handled by a few popups saying "you understand that role-playing about suicidal or harmful topics can lead to accidents and/or death, and that this is not the platform's responsibility; to continue, check that you agree [ ]"
imtringued | No.45040767
It reminds me of gray market capital investments. They are actually quite regulated, and the contracts are only valid if the investor is fully aware of the risks associated with the investment.

In practice, providers sprinkle in a handful of warning messages, akin to California's cancer-warning labels, and call it a day.

Of course, this leaves judges unconvinced, and the contract gets reclassified as a loan, which means the provider was illegally operating as a bank without a banking license: a much more serious violation than scamming someone out of $5,000.