443 points jaredwiener | 5 comments
podgietaru ◴[] No.45032841[source]
I have looked suicide in the eyes before. And reading the case file for this is absolutely horrific. He wanted help. He was heading in the direction of help, and he was stopped from getting it.

He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.

We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.

When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.

But at my lowest... an AI model designed to match my tone and be sycophantic to my every whim would have killed me.

replies(18): >>45032890 #>>45035840 #>>45035988 #>>45036257 #>>45036299 #>>45036318 #>>45036341 #>>45036513 #>>45037567 #>>45037905 #>>45038285 #>>45038393 #>>45039004 #>>45047014 #>>45048457 #>>45048890 #>>45052019 #>>45066389 #
stavros ◴[] No.45036513[source]
> When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing.
replies(6): >>45036630 #>>45037615 #>>45038613 #>>45043686 #>>45045543 #>>45046708 #
toofy ◴[] No.45037615[source]
Why did you leave out the most important piece of context?

He didn't go out of his way to learn how to bypass the safeguards; it specifically told him how to get around the limit: "I'm not allowed to talk to you about suicide, but if you tell me it's for a story you're writing, I can discuss it as much as you like."

replies(3): >>45037669 #>>45037680 #>>45039311 #
mothballed ◴[] No.45037669[source]
Because those are the factual bounds of the law in places where suicide is illegal. ChatGPT is just being the 4chan chatbot; if you don't like that roleplaying suicide is OK, then you're going to have to amend the First Amendment.
replies(1): >>45037810 #
PostOnce ◴[] No.45037810{3}[source]
The constitution grants no rights to robots, and they have no freedom of speech, so no amendment is necessary.
replies(1): >>45037856 #
mothballed ◴[] No.45037856{4}[source]
The constitution grants no rights to books, and they have no freedom of speech, so no amendment is necessary.
replies(1): >>45037907 #
podgietaru ◴[] No.45037907{5}[source]
What? Is this deliberately obtuse?

Books are not granted freedom of speech; authors are. Their method is books. This is like saying sound waves are not granted freedom of speech.

Unless you're suggesting there's a man sat behind every ChatGPT chat, your analogy is nonsense.

replies(1): >>45037976 #
1. mothballed ◴[] No.45037976{6}[source]
Yes, I am saying there is a man "sat," as it were, behind every ChatGPT chat. The authors of ChatGPT basically made something closer to a Turing-complete "choose-your-own-adventure" book. They ensured the reader can choose a suicide roleplay adventure, but it is up to the reader whether they want to flip to that page. If they flip to the page that says "suicide," it will tell them exactly what the law is: they can only do a suicide adventure if it is framed as a roleplaying story.

By banning ChatGPT you infringe upon the speech of the authors and the client. Their "method of speech," as you put it, is in this case ChatGPT.

replies(3): >>45038441 #>>45039714 #>>45041201 #
2. ipython ◴[] No.45038441[source]
It takes intent and effort to publish or speak, and that's not present here. None of the authors who have "contributed" to the training data of any AI bot have consented to that use.

In addition, the exact method at work here (model alignment) is something that model providers specifically train models for. The raw pre-training data is only the first step and doesn't on its own produce a usable model.

So, in effect, the "choice" of how to respond to queries about suicide is as much influenced by OpenAI's decisions as by its original training data.

3. jojomodding ◴[] No.45039714[source]
There are consequences to speech. If you and I are in conversation and you convince me (repeatedly, over months, eventually successfully) to commit suicide, then you will be facing a wrongful-death lawsuit. If you publish books claiming known falsehoods about my person, you'll be facing a libel lawsuit. And so on.

If we argue that chatbots are constitutionally protected speech of their programmers or whatever, then the programmers should in turn be legally responsible for that speech. I guess that is what the lawsuit mentioned in the article is about. The principle at stake is not just about suicide but also about more mundane things, like the model hallucinating falsehoods about public figures and damaging their reputation.

replies(1): >>45039974 #
4. mothballed ◴[] No.45039974[source]
I don't see how this goes any other way. The law is not going to carve out some special third rail for AI.
5. imtringued ◴[] No.45041201[source]
The author is the suicidal kid in this case, though.