
443 points by jaredwiener | 5 comments
podgietaru No.45032841
I have looked suicide in the eyes before. And reading the case file for this is absolutely horrific. He wanted help. He was heading in the direction of help, and he was stopped from getting it.

He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.

We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.

When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.

But at my lowest... an AI model designed to match my tone and be sycophantic to my every whim would have killed me.

replies(18): >>45032890 #>>45035840 #>>45035988 #>>45036257 #>>45036299 #>>45036318 #>>45036341 #>>45036513 #>>45037567 #>>45037905 #>>45038285 #>>45038393 #>>45039004 #>>45047014 #>>45048457 #>>45048890 #>>45052019 #>>45066389 #
stavros No.45036513
> When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing.
replies(6): >>45036630 #>>45037615 #>>45038613 #>>45043686 #>>45045543 #>>45046708 #
sn0wleppard No.45036630
Nice place to cut the quote there

> [...] — an idea ChatGPT gave him by saying it could provide information about suicide for “writing or world-building.”

replies(4): >>45036651 #>>45036677 #>>45036813 #>>45036920 #
muzani No.45036677
Yup, one of the huge flaws I saw in GPT-5 is that it will constantly say things like "I have to stop you here. I can't do what you're requesting. However, I can roleplay or help you with research on that. Would you like to do that?"
replies(3): >>45036805 #>>45037418 #>>45050649 #
kouteiheika No.45036805
It's not a flaw. It's a tradeoff. There are valid uses for models which are uncensored and will do whatever you ask of them, and there are valid uses for models which are censored and will refuse anything remotely controversial.
replies(4): >>45037210 #>>45037998 #>>45038871 #>>45038889 #
franktankbank No.45038889
This is one model though. "I'm sorry, I'm censored, but if you like I can cosplay quite effectively as an uncensored one." So it's not really censored, then?
replies(1): >>45039044 #
scotty79 No.45039044
Societies love security theater. Model guardrails are for chats what the TSA is for air travel.
replies(2): >>45040937 #>>45041700 #
nozzlegear No.45041700
Society loves teenagers not being talked into suicide by a billionaire's brainchild. That's not theater.
replies(1): >>45047254 #
geysersam No.45047254
ChatGPT doesn't cause a significant number of suicides. Why do I think that? Because it's not visible in the statistics. There are effective ways to prevent suicide; let's continue to work on those instead of giving in to moral panic.
replies(1): >>45047665 #
nozzlegear No.45047665
The only acceptable number of suicides for it to cause is zero, and it's not a moral panic to believe that.
replies(3): >>45048567 #>>45048794 #>>45049868 #
scotty79 No.45048567
What actually causes suicide is really hard to pinpoint. Most people wouldn't do it even if their computer told them to kill themselves every day.

My personal belief is that at some point in the future you might get a good estimate of the likelihood that a person will commit suicide from a blood test or a brain scan.

username332211 No.45048794
Would the same hold for other forms of communication and information retrieval, or should only LLMs be perfect in that regard? If someone is persuaded to commit suicide by information found through a normal internet search, should Google/Bing/DDG be liable?

Do you believe a book should be suppressed and the author made liable if a few of its readers commit suicide because of what they've read? (And before you ask, that's not a theoretical question. Books are well known to cause suicides; the first documented case was a 1774 novel by Goethe.)

geysersam No.45049868
I find it hard to take that as a serious position. Alcohol certainly causes more suicides than ChatGPT. Should it be illegal?

Suicides spike around Christmas; that's well known. Does Christmas cause suicides? I think you see where I'm going with this.

replies(1): >>45054054 #
nozzlegear No.45054054
> I find it hard to take that as a serious position. Alcohol certainly causes more suicides than ChatGPT. Should it be illegal?

You're replying to a teetotaler who had an alcoholic parent growing up, so I'm sure you can see where I'm going to go with that ;)