
443 points | jaredwiener | 2 comments
podgietaru (No.45032841)
I have looked suicide in the eyes before. And reading the case file for this is absolutely horrific. He wanted help. He was heading in the direction of help, and he was stopped from getting it.

He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.

We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.

When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.

But at my lowest... An AI model designed to match my tone and be sycophantic to my every whim. It would have killed me.

fzeindl (No.45037567)
> An AI Model designed to match my tone and be sycophantic to my every whim. It would have killed me.

Matching tone and being sycophantic to every whim. Just like many really bad therapists. Only therapists are legally responsible if they cause a death, which makes them care (beyond compassion and morality).

The criminal justice system is also a mechanism for preventing individuals who perform unwanted actions from doing them again.

You can't punish an AI for messing up. You would have to pull it out of circulation after each major screw-up, which isn't financially feasible, and you would somehow need to make it want to avoid that.

podgietaru (No.45037930)
Take a step back and think about what the model told that teenager. It specifically told him to hide his behaviour from the people who would have tried to stop him and to get him help.

There is no comparison to therapists, because a therapist would NEVER do that unless they wanted to cause harm.

fzeindl (No.45038048)
> There is no comparison to therapists, because a therapist would NEVER do that unless they wanted to cause harm.

Some therapists ultimately might. Therapists have been stripped of their licenses for leading abusive cults:

https://en.m.wikipedia.org/wiki/Center_for_Feeling_Therapy

lionkor (No.45038274)
That's an edge case; this case is ChatGPT working as intended.
fzeindl (No.45038484)
Exactly. That might be worth thinking about. Humans make mistakes. LLMs make mistakes.

Yet for humans we have built a society that prevents these mistakes except in edge cases.

Would humans make these mistakes as often as LLMs if there were no consequences?