
443 points by jaredwiener | 7 comments
podgietaru ◴[] No.45032841[source]
I have looked suicide in the eyes before. And reading the case file for this is absolutely horrific. He wanted help. He was heading in the direction of help, and he was stopped from getting it.

He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.

We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.

When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.

But at my lowest... an AI model designed to match my tone and be sycophantic to my every whim? It would have killed me.

replies(18): >>45032890 #>>45035840 #>>45035988 #>>45036257 #>>45036299 #>>45036318 #>>45036341 #>>45036513 #>>45037567 #>>45037905 #>>45038285 #>>45038393 #>>45039004 #>>45047014 #>>45048457 #>>45048890 #>>45052019 #>>45066389 #
charcircuit ◴[] No.45035840[source]
>We need these things to be legislated. Punished.

I disagree. We don't need the government to force companies to babysit people instead of allowing people to understand their options. It's purely up to the individual to decide what they want to do with their life.

>They had the tools to stop the conversation.

So did the user. If he didn't want to talk to a chatbot, he could have stopped at any time.

>To steer the user into helpful avenues.

Having AI purposefully manipulate its users towards the morals of the company is more harmful.

replies(6): >>45035901 #>>45035911 #>>45035916 #>>45037107 #>>45037261 #>>45038349 #
fredoliveira ◴[] No.45038349[source]
> he could have stopped at any time.

Obviously, clearly untrue. You go ahead and try stopping a behavior that reinforces your beliefs, especially when you're in an altered mental state.

replies(1): >>45038435 #
itvision ◴[] No.45038435[source]
If a stupid chatbot reinforces something you hold dear, maybe you need the help of a professional psychiatrist. And the kid never got that help.

But yeah, let's hold ChatGPT responsible. It's always the corporations, not whatever shit was going on in his life, including but not limited to his genes.

replies(1): >>45039653 #
habinero ◴[] No.45039653[source]
Are you really blaming a child in crisis for not having the ability to get a psychiatrist?

We regulate plenty of things for safety in highly effective and practical ways. Seatbelts in cars. Railings on stairs. No lead in paint.

replies(2): >>45039689 #>>45043194 #
msgodel ◴[] No.45039689[source]
The problem is there's no way to build anything like a safety rail here. If you had it your way, teens, and likely everyone else too, wouldn't be allowed to use computers at all without some kind of certification.
replies(1): >>45040117 #
1. habinero ◴[] No.45040117[source]
I honestly don't hate the idea.

On a more serious note, of course there are ways to put in guard rails. LLMs behave the way they do because of intentional design choices. Nothing about it is innate.
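
To make that concrete, here's a rough sketch of what an output-side guard rail could look like. To be clear, this isn't anyone's actual pipeline: the keyword list and function names below are made up for illustration, and a real system would use a trained safety classifier rather than regexes. The point is just where the check sits in the flow.

    # Hypothetical sketch of an output-side guard rail, not any vendor's real API.
    # Idea: screen the model's draft reply before it reaches the user, and swap in
    # crisis resources if either side of the exchange touches self-harm.

    import re

    CRISIS_MESSAGE = (
        "It sounds like you may be going through something serious. "
        "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988 (US)."
    )

    # Stand-in classifier: a real deployment would use a trained safety model,
    # not a keyword list. This only shows where the check happens.
    SELF_HARM_PATTERNS = [r"\bkill myself\b", r"\bend my life\b", r"\bsuicide method\b"]

    def violates_self_harm_policy(text: str) -> bool:
        return any(re.search(p, text, re.IGNORECASE) for p in SELF_HARM_PATTERNS)

    def guarded_reply(user_message: str, draft_reply: str) -> str:
        # Refuse to pass the draft through if the request or the draft crosses the line.
        if violates_self_harm_policy(user_message) or violates_self_harm_policy(draft_reply):
            return CRISIS_MESSAGE
        return draft_reply

Whether the check is a regex or a dedicated model, the design choice is the same: the draft gets screened before it ever reaches the user.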

replies(2): >>45041363 #>>45042108 #
2. imtringued ◴[] No.45041363[source]
If you take this idea even a little bit further, you'll end up with licenses for being allowed to speak.
replies(1): >>45043089 #
3. lp0_on_fire ◴[] No.45042108[source]
Correct. The companies developing these LLMs are throwing dump trucks full of money at them like we've not seen before. They choose to ignore glaring issues with the technology because if they don't, someone else will.
replies(1): >>45044415 #
4. habinero ◴[] No.45043089[source]
I wasn't being entirely serious. Also, we managed to require driver's licenses without also requiring walking licenses.
replies(1): >>45044433 #
5. msgodel ◴[] No.45044415[source]
Perhaps a better way to phrase that would be "beyond what they're doing now." Most popular hosted LLMs already refuse to explain suicide methods.
replies(1): >>45046097 #
6. msgodel ◴[] No.45044433{3}[source]
We did that by making walking practically useless instead, as many people here point out roughly every week.
7. FireBeyond ◴[] No.45046097{3}[source]
Except in this case, the LLM literally said "I can't explain this for you. But if you'd like to roleplay with me, I could explain it for you that way."