rideontime No.45032301
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. It encouraged him to share these feelings only with ChatGPT, talked him out of actions which would have revealed his intentions to his parents. Praised him for hiding his drinking, thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
idle_zealot No.45032582
I wonder if we can shift the framing on these issues. The LLM didn't do anything, it has no agency, it can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way, otherwise any action can be wrapped in machine learning to avoid accountability.
edanm No.45036923
If ChatGPT has saved people who might otherwise have died (e.g. by offering good medical advice), are all those saved lives also something you "attribute" to OpenAI?

I don't know if ChatGPT has saved lives (though I've read stories claiming that, yes, this has happened). But assuming it has, are you OK saying that OpenAI has saved dozens or hundreds of lives? Given how scaling works, would you be OK saying that OpenAI has saved more lives than most doctors or hospitals, which is what I assume will happen within a few years?

Maybe your answer is yes to all the above! I bring this up because lots of people only want to attribute the downsides to ChatGPT but not the upsides.

nkrisc No.45037756
In any case, if you kill one person and separately save ten people, you’ll still be prosecuted for killing that one person.
mothballed No.45038050
That's not the standard we hold medical care providers, pharmaceutical companies, or even cops to. Not that I'm saying it would justify things one way or the other if we did.
Orygin No.45038674
It absolutely is? If a doctor is responsible for negligence resulting in the death of someone, they don't get a pass because they saved 10 other people in their career.
mothballed No.45038744
It's not negligent to perform a procedure knowing it will kill some of the patients who would have otherwise lived healthy lives, though.
Orygin No.45039595
If they would otherwise live healthy lives, why perform the procedure?

Most likely the patient is informed by the practitioner about the risks and can make an informed decision. That is not the case with ChatGPT, where OpenAI sells it as the best thing since sliced bread, with a puny little warning at the bottom of the page. Even worse are all the "AI therapy" apps popping up everywhere, where the user may think the AI is as good as a real therapist, while the company bears none of the responsibility when something goes wrong.

mothballed No.45039837
Because sometimes you don't know with certainty on an individual basis, only on a population basis, who is better off for having the procedure.

Maybe you are the one-in-a-billion who dies from a vaccine against a disease that, unknowably, you would never have contracted, or would only have contracted mildly. The doctors know that if they administer it enough times they will eventually kill someone, but they do it to save the others, though they will pretty much never put it that bluntly for your consideration.