443 points jaredwiener | 4 comments
1. rossant ◴[] No.45032652[source]
Should ChatGPT have the ability to alert a hotline or emergency services when it detects a user is about to commit suicide? Or would it open a can of worms?
replies(1): >>45033384 #
2. causal ◴[] No.45033384[source]
I don't think we should have to choose between "sycophantic coddling" and "alert the authorities". Surely there's a middle ground where it should be able to point the user to help and then refuse to participate further.

Of course jailbreaking via things like roleplay might still be possible, but at that point I don't really blame the model if the user is engineering the outcome.

replies(1): >>45034305 #
3. lawlessone ◴[] No.45034305[source]
Maybe add a simple tool for it to call, to notify a human who can determine whether there is an issue. Roughly something like the sketch below.
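
A minimal sketch of that idea, assuming an OpenAI-style function-calling setup; the tool name, its fields, and the escalation handler are hypothetical illustrations, not any real API:

```python
# Hypothetical tool definition the model could call instead of
# contacting authorities directly: it routes the case to a human
# reviewer who decides what, if anything, to do next.

ESCALATION_TOOL = {
    "type": "function",
    "function": {
        "name": "flag_for_human_review",  # hypothetical name
        "description": (
            "Flag the current conversation for review by a trained human "
            "responder when the user may be at risk of self-harm."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "conversation_id": {"type": "string"},
                "risk_summary": {
                    "type": "string",
                    "description": "Short model-written summary of the concern.",
                },
            },
            "required": ["conversation_id", "risk_summary"],
        },
    },
}


def handle_escalation(conversation_id: str, risk_summary: str) -> None:
    """Hypothetical handler: queue the case for a human reviewer
    rather than dispatching emergency services automatically."""
    # In a real system this might write to a triage queue monitored by
    # trained staff; here it just prints for illustration.
    print(f"[review queue] {conversation_id}: {risk_summary}")
```

The point of routing through a human queue rather than an automatic 911-style dispatch is exactly the false-positive problem raised below.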
replies(1): >>45034527 #
4. myvoiceismypass ◴[] No.45034527{3}[source]
We cannot even successfully prevent SWATting here in the States, and that process is full of human involvement.