
443 points | jaredwiener | 1 comment
Workaccount2 (No.45032856):
It's hard to see what is going on without seeing the actual chats, as opposed to the snippets in the lawsuit. A lot of suicidal people talk to these LLMs for therapy, and the reviews on the whole seem excellent. I'm not ready to jump on the bandwagon having seen only a handcrafted complaint.

Ironically, though, I could still see lawsuits like this weighing heavily against the sycophancy these models exhibit, as the limited chat excerpts given have that strong stench of "you are so smart and so right about everything!". If lawsuits like this lead to more "straight honest" models, I could see even more people killing themselves when their therapist model says "Yeah, but you kind of actually do suck".

password321 (No.45037730):
>If lawsuits like this lead to more "straight honest" models, I could see even more people killing themselves when their therapist model says "Yeah, but you kind of actually do suck".

It is not one extreme or the other. o3 is nowhere near as sycophantic as 4o, but it is also not going to tell you that you suck, especially in a suicidal context. 4o became the mainstream model probably because OpenAI realised that this is what most people want, rather than a more professional model like o3 (besides the fact that o3 also uses more compute).

The lawsuits probably did push them to RLHF GPT-5 toward at least a bit more of a middle ground, though that led to backlash because people "missed" 4o precisely for this type of behaviour, so they made GPT-5 a bit more "friendly" again. Still not as bad as 4o.