
443 points by jaredwiener | 1 comment
davidcbc ◴[] No.45032380[source]
This is a clear example of why the people claiming that using a chatbot for therapy is better than no therapy are... I'll be extremely generous and say misguided. This kid wanted his parents to know he was thinking about this and the chatbot talked him out of it.
replies(5): >>45032418 #>>45032727 #>>45034881 #>>45038138 #>>45038499 #
MBCook ◴[] No.45032418[source]
How many of these cases exist in the other direction? Where AI chatbots have actively harmed people’s mental health, possibly to the point of self-destructive behaviors or self-harm?

A single positive outcome is not enough to judge the technology beneficial, let alone safe.

replies(3): >>45032526 #>>45032847 #>>45042558 #
throwawaybob420 ◴[] No.45032526[source]
idk dude, if your technology encourages a teenager to kill himself and prevents him from alerting his parents via a cry for help, I don’t care how “beneficial” it is.
replies(3): >>45032555 #>>45033070 #>>45038178 #
threatofrain ◴[] No.45033070[source]
Although I don't believe current technology is ready for talk therapy, I'd say that anti-depressants can also cause suicidal thoughts and feelings. Judging the efficacy of medical technology can't be done with this kind of moral absolutism.
replies(5): >>45033196 #>>45033250 #>>45034150 #>>45035623 #>>45037490 #
AIPedant ◴[] No.45033196[source]
I think it's fine to be "morally absolutist" when it's non-medical technology, developed with zero input from federal regulators, yet being misused and misleadingly marketed for medical purposes.