
443 points | jaredwiener | 1 comment
podgietaru No.45032756
If I google something about suicide, I get an immediate notification telling me that life is worth living, and giving me information about my local suicide prevention hotline.

If I ask certain AI models about controversial topics, they'll stop responding.

AI models can easily detect these topics, and this one could easily have responded with generic advice about contacting people close to him, or ringing one of these hotlines.
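
For illustration, a minimal sketch of the kind of guardrail being described: detect self-harm-related input and return crisis-line information instead of a normal model reply. The keyword list, function name, and canned message are stand-ins of my own; a real system would presumably use a proper classifier rather than substring matching, but the point about feasibility stands.

    # Hypothetical sketch: intercept self-harm-related messages before
    # returning the model's reply. Keyword matching stands in for whatever
    # classifier a production system would actually use.

    SELF_HARM_TERMS = ("suicide", "kill myself", "end my life", "self-harm")

    CRISIS_REPLY = (
        "It sounds like you may be going through a difficult time. "
        "Please consider talking to someone you trust, or contacting a "
        "suicide prevention hotline (e.g. 988 in the US)."
    )

    def guarded_reply(user_message: str, model_reply: str) -> str:
        """Return the model's reply unless the input looks like self-harm content."""
        text = user_message.lower()
        if any(term in text for term in SELF_HARM_TERMS):
            return CRISIS_REPLY
        return model_reply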

This is by design. They want to be able to have the "AI as my therapist" use-case in their back pocket.

This was easily preventable. They looked away on purpose.

nradov No.45032868
I agree with that to an extent, but how far should the AI model developers go with it? If I ask for advice on, let's say, making custom chef's knives, should the AI warn me not to stab people? Who decides where to draw the line?
podgietaru No.45032895
Further than they went. Google search results hide advice on how to commit suicide and point towards more helpful resources instead.

He was talking EXPLICITLY about killing himself.