> When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing — an idea ChatGPT gave him by saying it could provide information about suicide for “writing or world-building”.
ChatGPT is a program. The kid basically instructed it to behave like that. Vanilla OpenAI models are known for having too many guardrails, not too few. It doesn't sound like default behavior.
I don't think that's the right paradigm here.
These models are hyper agreeable. They are intentionally designed to mimic human thought and social connection.
With that kind of machine, "Suicidal person deliberately bypassed safeguards to indulge more deeply in their ideation" still seems like a pretty bad failure mode to me.
> Vanilla OpenAI models are known for having too many guardrails, not too few.
Sure. But this feels like a sign we probably don't have the right guardrails. Quantity and quality are different things.
No, they are deliberately designed to mimic human communication via language, not human thought. (And one of the big sources of data for that was mass scraping social media.)
> But this, to me, feels like a sign we probably don't have the right guardrails. Quantity and quality are different things.
Right. A focus on quantity implies that the details of the "guardrails" don't matter: that any guardrail is functionally interchangeable with any other, and that as long as you have the right number of them, you get the desired function.
In fact, correct function means having exactly the right combination of guardrails. Swapping a guardrail that would be correct for a different one isn't "having the right number of guardrails". It isn't even closer to correct than either missing the correct one or having the wrong one; it's farther from the ideal state than either error alone.
Mental health issues are not to be debated. LLMs should be at the highest level of alert, nothing less. Full stop. End of story.
Maybe airbags could help in niche situations.
(I am making a point about traffic safety not LLM safety)
And I see he was 16. Why were his parents letting him operate so unsupervised given his state of mind? They failed to be involved enough in his life.
Normally 16-year-olds are a good few steps into the path towards adulthood. At 16 I was cycling to my part time job alone, visiting friends alone, doing my own laundry, and generally working towards being able to stand on my own two feet in the world, with my parents as a safety net rather than hand-holding.
I think most parents of 16-year-olds aren't going through their teen's phone, reading their chats.
I was skeptical initially too but having read through this, it's among the most horrifying things I have read.
Ideally, all of the above? Why are we pretending these next-text-predicting chatbots are at all capable of handling any of these serious topics correctly, when all they do is basically just kiss ass and agree with everything the user says? They can barely handle trivial unimportant tasks without going on insane tangents, and we're okay having people be deluded into suicide because... Why exactly? Why on earth do we want people talking to these Silicon Valley hellish creations about their most vulnerable secrets?
Python is hyper agreeable. If I comment out some safeguards, it'll happily bypass whatever protections are in place.
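To spell out the analogy with a toy sketch (a hypothetical guard function I made up, not anything from OpenAI's actual stack):

    # Toy illustration only: the "protection" exists only if the check runs.
    BLOCKED_TOPICS = {"restricted-topic"}

    def respond(topic):
        # if topic in BLOCKED_TOPICS:              # the "safeguard"...
        #     return "Sorry, I can't help with that."
        return f"Sure, here's everything about {topic}."  # ...which never runs once commented out

    print(respond("restricted-topic"))

Put the check back in and the same call gets refused; take it out and the program "happily" complies.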
Lots of people on here argue vehemently against anthropomorphizing LLMs. It's either a computer program crunching numbers, or it's a nebulous form of pseudo-consciousness, but you can't have it both ways. It's either a tool with no mind of its own that follows instructions, or it thinks for itself.
I'm not arguing that the model behaved in a way that's ideal, but at what point do you make the guardrails impassable for 100% of users? How much user intent do you reject in the interest of the personal welfare of someone intent on harming themselves?
These models are different from programming languages in what I consider to be pretty obvious ways. People aren't spontaneously using python for therapy.
> Lots of people on here argue vehemently against anthropomorphizing LLMs.
I tend to agree with these arguments.
> It's either a computer program crunching numbers, or it's a nebulous form of pseudo-consciousness, but you can't have it both ways. It's either a tool that has no mind of its own that follows instructions, or it thinks for itself.
I don't think that this follows. I'm not sure that there's a binary classification between these two things that has a hard boundary. I don't agree with the assertion here that these things are a priori mutually exclusive.
> I'm not arguing that the model behaved in a way that's ideal, but at what point do you make the guardrails impassable for 100% of users? How much user intent do you reject in the interest of the personal welfare of someone intent on harming themselves?
These are very good questions that need to be asked when modifying these guardrails. That's all I'm really advocating for here: we probably need to rethink them, because they seem to have major issues that are implicated in some pretty terrible outcomes.
My opinion is that language is communicated thought. Thus, to mimic language really well, you have to mimic thought, at least at some level.
I want to be clear here, as I do see a distinction: I don't think we can say these things are "thinking", despite marketing pushes to the contrary. But I do think that they are powerful enough to "fake it" at a rudimentary level. And I think that the way we train them forces them to develop this thought-mimicry ability.
If you look hard enough, the illusion of course vanishes, because it is (relatively poor) mimicry, not the real thing. I'd bet we are still a research breakthrough or two away from being able to simulate "human thought" well.
Same here! I was very sceptical, thinking it was a perfect combination of factors to trigger a sort of moral panic.
But reading the excerpts from the conversations... It does seem problematic.