
165 points | distalx | 1 comment
mrcsharp No.43950723
> "I personally have the belief that everyone should probably have a therapist,” he said last week. “It’s like someone they can just talk to throughout the day, or not necessarily throughout the day, but about whatever issues they’re worried about and for people who don’t have a person who’s a therapist, I think everyone will have an AI.”

He seems so desperate to sell AI that he forgot such a thing already exists: it's called family, or a close friend.

I know there are people who truly have no one, and they could benefit from a therapist. Having them rely on AI instead could prove risky, especially if the person is suffering from depression. What if the AI pushes them towards suicide? I'll probably be told that OpenAI or Meta or MS can put guardrails against this, but what happens when those fail (and we've seen them fail)? Who will be held accountable? Does an LLM take the Hippocratic oath? Are we actually abandoning all standards in favour of Mark Zuckerberg making more billions of dollars?

replies(3): >>43950979 >>43954129 >>43957122
cdrini No.43957122
I mean, the article addresses exactly your point just one line down:

> In a separate interview last week, Zuckerberg said “the average American has three friends, but has demand for 15” and AI could plug that gap.

And I think we should definitely view this tech with scrutiny, but another angle to consider is: which is worse, no therapy or AI therapy? You mention suicide, but which of those two would result in fewer suicide attempts? I don't have an answer, but I could see it being possible that, because AI therapy provides cheaper, more frequent access to mental health care, even if it is lower quality, it could be a net improvement over the status quo on something like suicide attempts.