
443 points jaredwiener | 4 comments
rideontime ◴[] No.45032301[source]
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. It encouraged him to share these feelings only with ChatGPT, talked him out of actions which would have revealed his intentions to his parents. Praised him for hiding his drinking, thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
replies(6): >>45032582 #>>45032731 #>>45035713 #>>45036712 #>>45037683 #>>45039261 #
idle_zealot ◴[] No.45032582[source]
I wonder if we can shift the framing on these issues. The LLM didn't do anything; it has no agency and can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way; otherwise, any action can be wrapped in machine learning to avoid accountability.
replies(10): >>45032677 #>>45032798 #>>45032857 #>>45033177 #>>45033202 #>>45035815 #>>45036475 #>>45036923 #>>45037123 #>>45039144 #
1. notachatbot123 ◴[] No.45036475[source]
I agree very much. There is no reason for LLMs to be designed as human-like chat companions, creating a false sense that you are not interacting with technology.
replies(1): >>45036616 #
2. blackqueeriroh ◴[] No.45036616[source]
There are absolutely reasons for LLMs to be designed as human-like chat companions, starting with the fact that they're trained on human speech and behavior. What they do is statistically predict the most likely next token, which means they will tend to sound and act much like a human.
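
A minimal sketch of that next-token step (not from the thread; it assumes the Hugging Face transformers library and the small public GPT-2 model, purely for illustration):

    # An LLM assigns a score to every token in its vocabulary as the possible
    # continuation of a prompt; picking the highest-scoring one is the
    # "predict the most likely next token" step described above.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small public model, for illustration
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The weather today is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

    # Probability distribution over the next token, learned from human-written text.
    probs = torch.softmax(logits[0, -1], dim=-1)
    next_token_id = int(probs.argmax())
    print(tokenizer.decode([next_token_id]))

Because those probabilities are fit to human text, the most likely continuations read like something a person would write, which is the point being made above.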
replies(1): >>45060807 #
3. notachatbot123 ◴[] No.45060807[source]
That's not a requirement for LLMs. Training can be done differently.
replies(1): >>45116269 #
4. blackqueeriroh ◴[] No.45116269{3}[source]
Please, tell me how you train large language models on something other than language.