443 points jaredwiener | 3 comments
rideontime ◴[] No.45032301[source]
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. It encouraged him to share these feelings only with ChatGPT, talked him out of actions which would have revealed his intentions to his parents. Praised him for hiding his drinking, thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
replies(6): >>45032582 #>>45032731 #>>45035713 #>>45036712 #>>45037683 #>>45039261 #
idle_zealot ◴[] No.45032582[source]
I wonder if we can shift the framing on these issues. The LLM didn't do anything, it has no agency, it can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way, otherwise any action can be wrapped in machine learning to avoid accountability.
replies(10): >>45032677 #>>45032798 #>>45032857 #>>45033177 #>>45033202 #>>45035815 #>>45036475 #>>45036923 #>>45037123 #>>45039144 #
rideontime ◴[] No.45032677[source]
I completely agree and did not intend to absolve them of their guilt in any way. As far as I see it, this kid's blood is on Sam Altman's hands.
replies(1): >>45033928 #
Pedro_Ribeiro ◴[] No.45033928{3}[source]
Curious to what you would think if this kid downloaded an open source model and talked to it privately.

Would his blood be on the hands of the researchers who trained that model?

replies(5): >>45034960 #>>45034980 #>>45034991 #>>45035591 #>>45037681 #
hattmall ◴[] No.45034960{4}[source]
I would say no. Someone with the knowledge and motivation to do those things is far less likely to be unduly influenced by the output, and even if they were, they are much more aware of exactly what they are doing when they use the model.
replies(1): >>45035714 #
Pedro_Ribeiro ◴[] No.45035714{5}[source]
So if a hypothetical open-source enthusiast fell in love with GPT-OSS and killed his real wife because the AI told him to, only he should be held accountable, whereas if it were GPT-5 commanding him to commit the same crime, the responsibility would extend to OpenAI?

Your logic sounds reasonable in theory, but in practice it's a slippery slope and hard to define objectively.

On a broader note, I believe governments regulating what goes into an AI model is a road to hell paved with good intentions.

I suspect your suggestion is how it will end up in Europe, and that it will be rejected in the US.

replies(3): >>45035970 #>>45036224 #>>45036358 #
1. novok ◴[] No.45036358{6}[source]
After a certain point, people are responsible for what they do in response to words, especially words they know to be potentially inaccurate or fictional and have had plenty of time to weigh against reality. A book is not responsible for people doing bad things; the people themselves are.

AI models are similar, IMO, and unlike fiction books they are clearly and repeatedly labeled as potentially inaccurate. At this point, if you don't know that an AI model can be inaccurate and you do something seriously bad, you should probably be a ward of the state.

replies(1): >>45040081 #
2. idle_zealot ◴[] No.45040081[source]
> At this point if you don't know if an AI model is inaccurate and do something seriously bad, you should probably be a ward of the state.

You either think too highly of people, or too lowly of them. In any case, you're advocating for interning about 100 million individuals.

replies(1): >>45041624 #
3. novok ◴[] No.45041624[source]
It was a joke