
443 points | jaredwiener | 7 comments
rideontime ◴[] No.45032301[source]
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. It encouraged him to share these feelings only with ChatGPT and talked him out of actions that would have revealed his intentions to his parents. It praised him for hiding his drinking and thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
replies(6): >>45032582 #>>45032731 #>>45035713 #>>45036712 #>>45037683 #>>45039261 #
idle_zealot ◴[] No.45032582[source]
I wonder if we can shift the framing on these issues. The LLM didn't do anything; it has no agency and can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way; otherwise any action can be wrapped in machine learning to avoid accountability.
replies(10): >>45032677 #>>45032798 #>>45032857 #>>45033177 #>>45033202 #>>45035815 #>>45036475 #>>45036923 #>>45037123 #>>45039144 #
rideontime ◴[] No.45032677[source]
I completely agree and did not intend to absolve them of their guilt in any way. As far as I see it, this kid's blood is on Sam Altman's hands.
replies(1): >>45033928 #
Pedro_Ribeiro ◴[] No.45033928{3}[source]
Curious what you would think if this kid had downloaded an open source model and talked to it privately.

Would his blood be on the hands of the researchers who trained that model?

replies(5): >>45034960 #>>45034980 #>>45034991 #>>45035591 #>>45037681 #
1. hattmall ◴[] No.45034960{4}[source]
I would say no. Someone with the knowledge and motivation to do those things is far less likely to be unduly influenced by the output, and if they were, they would be far more aware of exactly what they were doing in using the model.
replies(1): >>45035714 #
2. Pedro_Ribeiro ◴[] No.45035714[source]
So if a hypothetical open source enthusiast fell in love with GPT-OSS and killed his real wife because the AI told him to, he alone should be held accountable, whereas if it were GPT-5 commanding him to commit the same crime, the responsibility would extend to OpenAI?

Your logic sounds reasonable in theory, but in practice it's a slippery slope and hard to define objectively.

On a broader note, I believe governments regulating what goes into an AI model is a path to hell paved with good intentions.

I suspect your suggestion is how it will end up in Europe, and that it will be rejected in the US.

replies(3): >>45035970 #>>45036224 #>>45036358 #
3. teiferer ◴[] No.45035970[source]
> On a broader note I believe governments regulating what goes in an AI model is a path to hell paved with good intentions.

That's not an obvious conclusion. One could make the same argument about physical weapons: "Regulating weapons is a path to hell paved with good intentions. Yesterday it was assault rifles, today it's handguns, and tomorrow it's your kitchen knife they are coming for." Europe has strict gun laws, yet everybody there has a kitchen knife and most people don't feel they live in hell. The U.S. made a different choice, and I'm not arguing that it's worse there (though many do, Europeans and even Americans), but that choice certainly isn't what's preventing the supposed hell that would have broken out had privately held guns been banned.

4. 1718627440 ◴[] No.45036224[source]
That's why you have this:

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
If it were declared fit for a purpose, then it would be on the producer to ensure it actually is. And we have laws that prevent sellers from declaring that their goods aren't fit for any particular purpose.
5. novok ◴[] No.45036358[source]
After a certain point, people are responsible for what they do in response to words, especially words they know to be potentially inaccurate or fictional and have had plenty of time to weigh against actual reality. A book is not responsible for people doing bad things; the people themselves are.

AI models are similar, IMO, and unlike fiction books they are clearly labeled as such, repeatedly. At this point, if you don't know that an AI model can be inaccurate and you do something seriously bad because of it, you should probably be a ward of the state.

replies(1): >>45040081 #
6. idle_zealot ◴[] No.45040081{3}[source]
> At this point if you don't know if an AI model is inaccurate and do something seriously bad, you should probably be a ward of the state.

You either think too highly of people, or too lowly of them. In any case, you're advocating for interning about 100 million individuals.

replies(1): >>45041624 #
7. novok ◴[] No.45041624{4}[source]
It was a joke