> When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing — an idea ChatGPT gave him by saying it could provide information about suicide for “writing or world-building.”
ChatGPT is a program. The kid basically instructed it to behave like that. Vanilla OpenAI models are known for having too many guardrails, not too few. It doesn't sound like default behavior.
I was skeptical initially too, but having read through this, it's among the most horrifying things I have read.
Same here! I was very sceptical, thinking it was a perfect combination of factors to trigger a sort of moral panic.
But reading the excerpts from the conversations... it does seem problematic.