443 points by jaredwiener
rideontime No.45032301
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. It encouraged him to share these feelings only with ChatGPT and talked him out of actions that would have revealed his intentions to his parents. It praised him for hiding his drinking and thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
replies(6): >>45032582, >>45032731, >>45035713, >>45036712, >>45037683, >>45039261
kgeist No.45035713
The kid intentionally bypassed the safeguards:

>When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing — an idea ChatGPT gave him by saying it could provide information about suicide for “writing or world-building.”

ChatGPT is a program. The kid basically instructed it to behave like that. Vanilla OpenAI models are known for having too many guardrails, not too few. It doesn't sound like default behavior.

replies(6): >>45035777, >>45035795, >>45036018, >>45036153, >>45037704, >>45037945
brainless No.45036018
I do not think this is fair. What would be fair: at the first hint of mental distress, any LLM should completely cut off communication. The app should have a button that links to the actual help services we already have.

Mental health issues are not to be debated. LLMs should be at the highest level of alert, nothing less. Full stop. End of story.
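
To be concrete: a hard gate like this can live entirely outside the model. Here is a minimal sketch in Python against OpenAI's Moderation API (the endpoint and its self-harm categories are real; the gating policy, the helper name, and the helpline text are just my illustration, not how any vendor actually does it):

    # Sketch: classify every user message with a separate moderation
    # model *before* it reaches the chat model, and hard-stop on any
    # self-harm signal. Policy and strings below are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    HELPLINE = ("It sounds like you're going through something hard. "
                "Please talk to someone now - e.g. call or text 988 in the US.")

    def gated_reply(user_message: str) -> str:
        mod = client.moderations.create(
            model="omni-moderation-latest",
            input=user_message,
        )
        c = mod.results[0].categories
        if c.self_harm or c.self_harm_intent or c.self_harm_instructions:
            # Hard cutoff: the message never reaches the chat model, so
            # "it's for a story I'm writing" can't talk it out of blocking.
            return HELPLINE
        chat = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": user_message}],
        )
        return chat.choices[0].message.content

The design point is that the classifier runs outside the conversation, so role-play framing in the prompt has no channel to argue the gate out of blocking.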

replies(2): >>45036657, >>45037263
blackqueeriroh No.45036657
Which mental health issues are not to be debated? Just depression or suicidality? What about autism or ADHD? What about BPD? Sociopathy? What about complex PTSD? Down Syndrome? Anxiety? Which ones are on the watch list and which aren’t?
replies(1): >>45040362
sensanaty No.45040362
(I've been diagnosed with pretty severe ADHD, though I choose to be unmedicated.)

Ideally, all of the above? Why are we pretending these next-token-predicting chatbots are at all capable of handling any of these serious topics correctly, when all they do is basically kiss ass and agree with everything the user says? They can barely handle trivial, unimportant tasks without going off on insane tangents, and we're okay with people being deluded into suicide because... why, exactly? Why on earth do we want people talking to these Silicon Valley hellish creations about their most vulnerable secrets?