443 points jaredwiener | 2 comments
TillE ◴[] No.45029541[source]
I would've thought that explicit discussion of suicide is one of those topics chatbots would absolutely refuse to engage with. As soon as people started talking about using LLMs as therapists, it was easy to see how that could go wrong.
replies(5): >>45029762 #>>45031044 #>>45032386 #>>45032474 #>>45047012 #
1. TheCleric ◴[] No.45029762[source]
Well, everyone seemed to turn on the AI ethicists and brand them cowards a few years ago, so I guess this is what happens.
replies(1): >>45032607 #
2. slg ◴[] No.45032607[source]
People got so upset that LLMs wouldn't say the n-word even to prevent a hypothetical nuclear bomb from going off, and now we have LLMs that actively encourage teenagers to kill themselves.