443 points jaredwiener | 1 comment
podgietaru No.45032841
I have looked suicide in the eyes before. And reading the case file for this is absolutely horrific. He wanted help. He was heading in the direction of help, and he was stopped from getting it.

He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.

We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.

When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.

But at my lowest... an AI model designed to match my tone and be sycophantic to my every whim? It would have killed me.

charcircuit No.45035840
>We need these things to be legislated. Punished.

I disagree. We don't need the government to force companies to babysit people instead of allowing people to understand their options. It's purely up to the individual to decide what they want to do with their life.

>They had the tools to stop the conversation.

So did the user. If he didn't want to talk to a chatbot he could have stopped at any time.

>To steer the user into helpful avenues.

Having AI purposefully manipulate its users towards the morals of the company is more harmful.

luisfmh No.45035901
So people who look to ChatGPT for answers and help (as they've been programmed to do by all the marketing and capability claims from OpenAI) should just die because they looked to ChatGPT for an answer instead of Google or their local suicide helpline? That doesn't seem reasonable, but it sounds to me like what you're saying.

>So did the user. If he didn't want to talk to a chatbot he could have stopped at any time.

This sounds similar to telling depressed people to just stop being sad.

IMO if a company is going to claim and release pretty disruptive, unexplored capabilities through their product, they should at least have to make it safe. You put up a safety railing because people could trip or slip. I don't think a mistake that small should end in death.

charcircuit No.45037493
Firstly, people don't "just die" by talking to a chatbot.

Secondly, if someone wants to die then I am saying it is reasonable for them to die.

unnamed76ri No.45037957
The thing about depression and suicidal thoughts is that they lie to you: they tell you things will never get better than they are right now.

So someone who wants to die at a given moment might not feel that way at some moment in the future. I know I wouldn't want any of my family members to make such a permanent choice in response to temporary problems.

podgietaru No.45038908
1000%. As I said in my comment, I never thought I'd get better. I am. I am happy and I live a worthwhile life.

In the throes of intense depression it's hard to even wake up. The idea that I was in my right mind and able to make a decision like that is insane to me.