165 points distalx | 1 comment
sheepscreek ◴[] No.43949463[source]
That’s fair, but there’s another nuance they can’t solve for: cost and availability.

AI is not a substitute for traditional therapy, but it offers maybe 80% of the benefit at a fraction of the cost. It could be used to supplement therapy during the periods between sessions.

The biggest risk is with privacy. Meta can’t be trusted with knowing what you’re going to wear or eat; now imagine them knowing your deepest, darkest secrets. The advertising business model does not gel well with providing mental health support. Subscription (with privacy guarantees) is the way to go.

replies(5): >>43949589 #>>43949591 #>>43950064 #>>43950278 #>>43950547 #
zdragnar ◴[] No.43950278[source]
> The biggest risk is with privacy

No, the biggest risk is that it behaves in ways that actively harm users in a fragile emotional state, whether by enabling dangerous behavior or pushing them toward it.

Many people are already demonstrably unable to handle normal AI chatbots in a healthy manner. A "therapist" substitute that takes a position of authority as a counselor ramps that danger up drastically.

replies(1): >>43958253 #
sheepscreek ◴[] No.43958253{3}[source]
You’re saying that as if AI were a singular thing. It is not.

Also, for every naysayer I encounter now, I’m going to start by asking: “Have you ever been in therapy? For how long? Why did you stop? Did it help?”

Therapy isn’t a silver bullet. Finding a therapist who works for you can take years of patient trial and error.