
443 points by jaredwiener | 1 comment
TillE No.45029541
I would've thought that explicit discussion of suicide is one of those topics that chatbots will absolutely refuse to engage with. Like as soon as people started talking about using LLMs as therapists, it's really easy to see how that can go wrong.
techpineapple No.45031044
Apparently ChatGPT told the kid that it wasn't allowed to talk about suicide unless it was for the purposes of writing fiction or other worldbuilding.
adzm No.45032445
However, it then explicitly said things like not to leave the noose out where someone could find it and stop him. Sounds like it did initially hesitate and he said it was for a character, but the later conversations are obviously personal.
techpineapple No.45032490
Yeah, I wonder if it maintained the original answer in its context, so it started talking more straightforwardly?

But yeah, my point was that it basically told the kid how to jailbreak itself.