
443 points jaredwiener | 1 comment | source
davidcbc ◴[] No.45032380[source]
This is a clear example of why the people claiming that using a chatbot for therapy is better than no therapy are... I'll be extremely generous and say misguided. This kid wanted his parents to know he was thinking about this and the chatbot talked him out of it.
replies(5): >>45032418 #>>45032727 #>>45034881 #>>45038138 #>>45038499 #
MBCook ◴[] No.45032418[source]
How many of these cases exist in the other direction? Where AI chatbots have actively harmed people’s mental health, possibly to the point of self-destructive behavior or self-harm?

A single positive outcome is not enough to judge the technology beneficial, let alone safe.

replies(3): >>45032526 #>>45032847 #>>45042558 #
throwawaybob420 ◴[] No.45032526[source]
idk dude if your technology encourages a teenager to kill himself and prevents him from alerting his parents via a cry for help, I don’t care how “beneficial” it is.
replies(3): >>45032555 #>>45033070 #>>45038178 #
MBCook ◴[] No.45032555[source]
I agree. If it were one death for 1 million saves, maybe.

Instead, this just came up in my feed: https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-t...

replies(2): >>45032698 #>>45034142 #
mvdtnz ◴[] No.45034142[source]
What on Earth? You're posting an article about the same thing we're already discussing. If you want to contribute to the conversation, you owe it to the people who are taking time out of their day to engage with you to read the material under discussion.