443 points jaredwiener | 7 comments
rideontime ◴[] No.45032301[source]
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. ChatGPT encouraged him to share his suicidal feelings only with it, talked him out of actions that would have revealed his intentions to his parents, praised him for hiding his drinking, and thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
replies(6): >>45032582 #>>45032731 #>>45035713 #>>45036712 #>>45037683 #>>45039261 #
idle_zealot ◴[] No.45032582[source]
I wonder if we can shift the framing on these issues. The LLM didn't do anything; it has no agency, and it can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way; otherwise any action can be wrapped in machine learning to avoid accountability.
replies(10): >>45032677 #>>45032798 #>>45032857 #>>45033177 #>>45033202 #>>45035815 #>>45036475 #>>45036923 #>>45037123 #>>45039144 #
edanm ◴[] No.45036923[source]
If ChatGPT has saved people who might otherwise have died (e.g. by offering good medical advice), are all those lives saved also something you "attribute" to OpenAI?

I don't know whether ChatGPT has saved lives (though I've read stories claiming that, yes, this has happened). But assuming it has, are you OK saying that OpenAI has saved dozens or hundreds of lives? Given how scaling works, would you be OK saying that OpenAI has saved more lives than most doctors or hospitals, which is what I assume will happen within a few years?

Maybe your answer is yes to all of the above! I bring this up because lots of people want to attribute only the downsides to ChatGPT, never the upsides.

replies(3): >>45037014 #>>45037756 #>>45043761 #
1. fsw ◴[] No.45037014[source]
Are you suggesting that killing a few people is acceptable as long as the net result is positive? I don't think that's how the law works.
replies(3): >>45037192 #>>45037207 #>>45037977 #
2. randyrand ◴[] No.45037192[source]
Seatbelts sometimes kill people, yet they're required by law.

The law certainly cares about net results.

3. tick_tock_tick ◴[] No.45037207[source]
But that is the standard by which cures, treatments, and drugs for managing issues like the ones in the article are judged.
4. coremoff ◴[] No.45037977[source]
It's the trolley problem reframed; I'm not sure we have a definitive answer to that.
replies(1): >>45038127 #
5. dpassens ◴[] No.45038127[source]
No. Central to the trolley problem is that the trolley is a _runaway_. In this case, OpenAI not only chose to start the trolley, they also chose not to brake even when it became apparent that they were going to run somebody over.
replies(1): >>45038338 #
6. coremoff ◴[] No.45038338{3}[source]
The tradeoff suggested above (not saying that it's the right way around or correct) is:

* If you provide ChatGPT, then 5 people who would have died will live and 1 person who would have lived will die ("go to the doctor" vs. "don't tell anyone that you're suicidal").

* If you don't provide ChatGPT, then 1 person who would have died will live and 5 people who would have lived will die.

Like many things, it's a tradeoff, and the tradeoffs might not be obvious up front; the toy calculation below just makes the arithmetic of that framing explicit.
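
A minimal sketch of the expected-value framing in Python. Every number in it (the population size and both rates) is an assumption invented purely for illustration; none of it is data from the thread or about ChatGPT:

    # Toy expected-value comparison for the 5:1 tradeoff sketched above.
    # All rates are made-up assumptions, chosen only to reproduce the 5-vs-1 framing.
    users = 1_000_000      # hypothetical user population
    p_saved = 5e-6         # assumed chance deployment saves a given user
    p_lost = 1e-6          # assumed chance deployment harms a given user

    expected_saved = users * p_saved   # 5.0 lives saved in expectation
    expected_lost = users * p_lost     # 1.0 life lost in expectation
    print(f"expected net lives if deployed: {expected_saved - expected_lost:+.1f}")

The structure, not the output, is the point: the sign of the result flips entirely on which assumed rates you plug in, which is exactly why the tradeoff isn't obvious up front.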

replies(1): >>45071863 #
7. morpheos137 ◴[] No.45071863{4}[source]
That's a speculative argument, and it would be laughed out of court.