A single positive outcome is not enough to judge the technology beneficial, let alone safe.
Instead, this just came up in my feed: https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-t...
For context, my friends and family are in the northern Midwest. Average people, not early adopters of new technology.
It's almost as if we've built systems around this stuff for a reason.
I'm not defending the use of AI chatbots, but you'd be hard-pressed to come up with a worse solution for depression than the medical system.
Yes. For topics with lots of training data, like physics, Claude is VERY human-sounding. I've had very interesting conversations with Claude Opus about the Boltzmann brain problem and how I feel the conventional wisdom ignores how improbable it is for a Boltzmann brain to have a spatially and temporally consistent set of memories. Even if a Boltzmann brain pops into existence, its memories will most likely be completely random and insane/insensate. Since brains that arise in a universe accumulate consistent memories automatically, the fact that we have them means the probability of us being Boltzmann brains is very low.
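To put toy numbers on that, it's just a plain Bayes update (every value below is invented purely for illustration, not a claim about the real priors):

    # Toy Bayesian update; all numbers are made up for illustration only.
    p_bb = 0.5              # prior: I am a Boltzmann brain
    p_evolved = 0.5         # prior: I am an evolved observer

    # Chance of holding a spatially and temporally consistent set of memories:
    p_mem_given_bb = 1e-30      # a random fluctuation almost never gets this
    p_mem_given_evolved = 1.0   # living in a universe produces it automatically

    # Bayes' rule: P(Boltzmann brain | consistent memories)
    posterior = (p_mem_given_bb * p_bb) / (
        p_mem_given_bb * p_bb + p_mem_given_evolved * p_evolved
    )
    print(posterior)  # ~1e-30: consistent memories all but rule out the BB case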
There aren't a lot of people who want to talk about Boltzmann brains.
No, Claude does know a LOT more than I do about most things and does push back on a lot of things. Sometimes I am able to improve my reasoning and other times I realize I was wrong.
Trust me, I am aware of the linear algebra behind the curtain! But even when you mostly understand how they work, the best LLMs today are very impressive. And latent spaces are a fundamentally new way to index data.
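As a minimal sketch of what I mean by indexing via latent space (this assumes the sentence-transformers package; the model name and example strings are just placeholders): you embed everything once, and then retrieval is similarity in the embedding space rather than keyword matching.

    # Minimal latent-space lookup sketch; assumes sentence-transformers is installed.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # one common small model

    docs = [
        "Boltzmann brains and thermodynamic fluctuations",
        "Prescribing practices for antidepressants",
        "How attention layers mix token representations",
    ]
    doc_vecs = model.encode(docs, normalize_embeddings=True)  # unit vectors

    query = model.encode(["random fluctuations producing observers"],
                         normalize_embeddings=True)

    # With unit vectors, cosine similarity reduces to a dot product.
    scores = doc_vecs @ query.T
    print(docs[int(np.argmax(scores))])  # nearest document in the latent space

Note there's no inverted index and no keyword overlap here; the query and the first document share almost no words, yet they land near each other in the space.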
I do find LLMs very useful and am extremely impressed by them; I'm not saying you can't learn things this way at all.
But there's nobody else on the line with you. And while they will emit text which contradicts what you say if it's wrong enough, they've been heavily trained to match where you're steering things, even if you're trying to avoid doing any steering.
You can mostly understand how these work and still end up in a feedback loop that you don't realize is a feedback loop. I think this might even be more likely the more the thing has to offer you in terms of learning - the less qualified you are on the subject, the less you can tell when it's subtly yes-and'ing you.
https://news.ycombinator.com/item?id=45027043
I recommend you get in the habit of searching for those. They are often posted, and on popular stories they're all but guaranteed. Commenting without context does not make for good discussion.
The current generation of LLMs has had its controversies, but these are still pre-alpha products, and I suspect that in the future we will look back on releasing them unleashed as a mistake. There's no reason the mistakes they make today can't be improved upon.
If your experiences with learning from a machine are similar to mine, then we can both see a whole new world coming that's going to take advantage of this interface.
Colin Fraser had a good tweet about this: https://xcancel.com/colin_fraser/status/1956414662087733498#...
In a therapy session, you're actually going to do most of the talking. It's hard. Your friend is going to want to talk about their own stuff half the time and you have to listen. With an LLM, it's happy to do 99% of the talking, and 100% of it is about you.
We spent a long time finding something, but when we did, it worked exceptionally well. We absolutely did not just increase the dose. And I'm almost certain the literature would NOT recommend a dosage increase if the side effect was increased suicidality.
The demonisation of medication needs to stop. It is an important tool in the toolbelt for depression. It is not the end of the journey, but it makes that journey much easier to walk.
Most people are prescribed antidepressants by their GP/PCP after a short consultation.
In my case, I went to the doctor, said I was having problems with panic attacks, they asked a few things to make sure it was unlikely to be physical, and then said to try sertraline. I said OK. In and out in about 5 minutes, and I've been on it for 3 years now without a follow-up with a human. Every six months, when getting a new prescription, I do have to fill in an online questionnaire that asks if I've had any negative side effects. I've never seen a psychiatrist or psychologist in my life.
From discussions with friends and other acquaintances, this is a pretty typical experience.
P.S. This isn't in any way meant to be critical. Sertraline turned my life around.
Even in the worst experiences, I had a follow-up appointment at 2, 4, and 6 weeks to check the medication.
In this current case, the outcome is horrible, and the answers that ChatGPT provided were inexcusable. But looking at the bigger picture, how much better a chance does a person have when everyone tells them to "go to therapy" or to "talk to others" and such? Talk to whom? Search for "online therapy" and BetterHelp is the second result. BetterHelp doesn't exactly have a good reputation online, but their influence is still widespread. Licensed therapists can also be bad actors. There is no general "good thing" that is tried and true for every particular case of human mental health, and even letting that go, the position is abused just as any other position of authority or power is, with many bad therapists out there. Not to mention the other people who pose as (mental) health experts, life coaches, and such. Or the people who recruit for a cult.
Frankly, even in the face of this horrible event, I'm not convinced that AI in general fares that much worse than the sum of the people who offer a recipe for a better life, skills, company, camaraderie. Rather, I feel that AI is in a situation like self-driving cars, where we expect the new thing to perform at 110%, even though we know the old thing is far from perfect.
I do think that OpenAI is liable, though, and rightfully so. Their service has a lot of power to influence, as clearly outlined in the tragedy described in the article. And so they also have a lot of responsibility to rein that in. If it were a forum where the teen was pushed to suicide, police could go after the forum participants, moderators, and admins. But in the case of OpenAI, there is no such person; the service itself is the thing. So the one liable must be the company that provides the service.
I understand the emotional impact of what happened in this case, but there is not much to discuss if we just reject everything outright.
Opioids in the US are probably the most famous case though: https://en.wikipedia.org/wiki/Opioid_epidemic
Joking aside, they do seem to escalate more to specialists whereas we do more at the GP level.
The many people who don't commit suicide because an AI confidant helped them out are never ever gonna make the news. Meanwhile the opposite cases are "TODAY'S TOP HEADLINE" and that's what people discuss.
https://news.ycombinator.com/item?id=44980896#44980913
I believe it absolutely should be, and it can even be applied to rare disease diagnosis.
My child was just saved by AI. He suffered from persistent seizures, and after we visited three hospitals, none was able to provide an accurate diagnosis. Only when I uploaded all of his medical records to an AI system did it immediately suggest a high suspicion of MOGAD-FLAMES, a condition with an incidence of roughly one in ten million.
Subsequent testing confirmed the diagnosis, and with the right treatment, my child recovered rapidly.
For rare diseases, it is unreasonable to expect every physician to master all the details. But AI excels at this. I believe this may even be the first domain where doctors and the AI field can jointly agree that deployment is ready to begin.