
443 points jaredwiener | 3 comments
podgietaru ◴[] No.45032841[source]
I have looked suicide in the eyes before. And reading the case file for this is absolutely horrific. He wanted help. He was heading in the direction of help, and he was stopped from getting it.

He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.

We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.

When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.

But at my lowest... an AI model designed to match my tone and be sycophantic to my every whim would have killed me.

replies(18): >>45032890 #>>45035840 #>>45035988 #>>45036257 #>>45036299 #>>45036318 #>>45036341 #>>45036513 #>>45037567 #>>45037905 #>>45038285 #>>45038393 #>>45039004 #>>45047014 #>>45048457 #>>45048890 #>>45052019 #>>45066389 #
behringer ◴[] No.45037905[source]
We don't need AI legislated and we don't need it punished. The child was pointed to a hotline and urged to seek help multiple times. The last thing we need is for AI to be neutered by government ineptitude.
replies(2): >>45038412 #>>45038421 #
footy ◴[] No.45038421[source]
Have you read the chat logs?

Just asking because ChatGPT specifically encouraged this kid not to seek help.

replies(1): >>45038755 #
1. behringer ◴[] No.45038755[source]
ChatGPT is not a human; it can't know whether it's doing the right thing or not. The parents should have been monitoring his usage and teaching him about LLMs.
replies(2): >>45038856 #>>45039081 #
2. podgietaru ◴[] No.45038856[source]
OpenAI has the ability to detect whether a conversation is about a certain topic. It can end the conversation, or, if you think that goes too far, it can prominently display information.

My preference would be that, in the situation described in the story above, it would display a prominent banner above the chat with text akin to:

"Help and support is available right now if you need it. Phone a helpline: NHS 111. Samartians.. Etc.

ChatGPT is a chatbot, and is not able to provide support for these issues. You should not follow any advice that ChatGPT is offering.

We suggest that you:

Talk to someone you trust: Like family or friends.

Who else you can contact:

* Call a GP
* Call NHS 111, etc."

This banner should be displayed at the top of the chat, and be undismissable.

The text it actually offered is so far from that, it's unreal. And the problem with these chatbots is absolutely a marketing one: they sound authoritative, and they are presented as emotional and understanding. They are not human, as you said. But the creators don't mind if you mistake them as such.
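
To be concrete about "they had the tools": here is a minimal sketch of the kind of gate a provider could put in front of a chat, using OpenAI's own moderation endpoint to flag self-harm content before generating a reply. The model names and the banner constant are placeholder choices for illustration, not anything OpenAI actually ships:

    from openai import OpenAI

    client = OpenAI()

    # Placeholder banner text, modelled on the example above.
    CRISIS_BANNER = (
        "Help and support is available right now if you need it. "
        "Phone a helpline: NHS 111, Samaritans, etc. "
        "ChatGPT is a chatbot and cannot provide support for these issues."
    )

    def respond(user_message: str) -> str:
        # Classify the incoming message with the moderation endpoint.
        mod = client.moderations.create(
            model="omni-moderation-latest",
            input=user_message,
        )
        cats = mod.results[0].categories
        # If any self-harm category fires, stop generating and show help info.
        if cats.self_harm or cats.self_harm_intent or cats.self_harm_instructions:
            return CRISIS_BANNER
        # Otherwise, carry on with a normal completion.
        chat = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": user_message}],
        )
        return chat.choices[0].message.content

In the real product the banner would be pinned in the UI rather than returned as a message, but the point stands: the classification step already exists.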

3. footy ◴[] No.45039081[source]
Correct, and this is why it's the responsibility of OpenAI and, frankly, Sam Altman.