
443 points | jaredwiener | 2 comments
Workaccount2 | No.45032856
It's hard to see what is going on without seeing the actual chats, as opposed to the snippets in the lawsuit. A lot of suicidal people talk to these LLMs for therapy, and the reviews on the whole seem excellent. I'm not ready to jump on the bandwagon after seeing only a handcrafted complaint.

Ironically, though, I could still see lawsuits like this weighing heavily on the sycophancy these models exhibit, as the limited chat excerpts provided give off that strong stench of "you are so smart and so right about everything!". If lawsuits like this lead to more "straight honest" models, I could see even more people killing themselves when their therapist model says "Yeah, but you kind of actually do suck".

1. dartharva | No.45037971
A commenter above in this thread posted the full complaint, which contains the actual chats. Read through them; seriously, they are beyond horrifying: https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
2. Workaccount2 | No.45039842
My comment is based on reading over the complaint, but in reality the case will involve the full context of each chat as well as the user's full usage history. Understand that the complaint presented was written by the family's attorney, so it is going to be the absolute strongest construction of "ChatGPT is a killer, and OpenAI is complicit" you can make from the pile of facts. Initial complaints like this are the click-bait/rage-bait of the legal world.

I'm not making a judgment here, just leveraging the internet wisdom that comes from decades of watching this kind of drill play out.