
443 points by jaredwiener | 6 comments
rideontime No.45032301
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. It encouraged him to share these feelings only with ChatGPT and talked him out of actions that would have revealed his intentions to his parents. It praised him for hiding his drinking and thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
replies(6): >>45032582 #>>45032731 #>>45035713 #>>45036712 #>>45037683 #>>45039261 #
idle_zealot No.45032582
I wonder if we can shift the framing on these issues. The LLM didn't do anything; it has no agency and can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way; otherwise, any action can be wrapped in machine learning to avoid accountability.
replies(10): >>45032677 #>>45032798 #>>45032857 #>>45033177 #>>45033202 #>>45035815 #>>45036475 #>>45036923 #>>45037123 #>>45039144 #
1. slipperydippery No.45032798
They have some responsibility because they're selling and framing these as more than the better-tuned Markov chain generators that they in fucking fact are, while offering access to anybody who signs up and knowing that many users misunderstand what they're dealing with (in part because these companies' hype-meisters, like Altman, are bullshitting us).
replies(1): >>45032928 #
2. idle_zealot No.45032928
No, that's the level of responsibility they ought to have if they were releasing these models as products. As it is, they've adopted a service model, and they should be held to the same standard as if there were a human employee on the other end of the chat interface. Cut through the technical obfuscation: they are 100% responsible for the output of their service endpoints.

This isn't a case of making a tool that can be used for good or ill, and it's not them providing some intermediary or messaging service like a forum with multiple human users and limited capacity for moderation. This is a direct consumer-to-business service. Treating it as anything else will open the floodgates to slapping an "AI" label on anything an organization doesn't want to be held accountable for.
replies(1): >>45033275 #
3. slipperydippery No.45033275
I like this framing even better.

This is similar to my take on things like Facebook apparently not being able to operate without psychologically destroying moderators. If that’s true… seems like they just shouldn’t operate, then.

If you’re putting up a service that you know will attempt to present itself as being capable of things it isn’t… seems like you should get in a shitload of trouble for that? Like maybe don’t do it at all? Maybe don’t unleash services you can’t constrain in ways that you definitely ought to?

replies(1): >>45036638 #
4. blackqueeriroh No.45036638
But understand that something like Facebook not operating doesn't actually make the world any safer. In fact, it makes the world less safe, because the same behavior happens on the open internet, where nobody is moderating it.
replies(1): >>45036953 #
5. drw85 No.45036953
I don't think this is true anymore.

Facebook has gone so far down the 'algorithmic control' rabbit hole that it would most definitely be better if it weren't operating anymore.

Its algorithm-driven bubble of misinformation destroys the people who don't question what they're shown.

replies(1): >>45116285 #
6. blackqueeriroh No.45116285
And you think the people who don’t question things would suddenly have more accurate information?