165 points distalx | 26 comments
1. sheepscreek ◴[] No.43949463[source]
That’s fair, but there’s another nuance they can’t solve for: cost and availability.

AI is not a substitute for traditional therapy, but it offers an 80% benefit at a fraction of the cost. It could be used to supplement therapy, for the periods between sessions.

The biggest risk is with privacy. Meta couldn’t be trusted with knowing what you’re going to wear or eat. Now imagine them knowing your deepest, darkest secrets. The advertising business model does not gel well with providing mental health support. Subscription (with privacy guarantees) is the way to go.

replies(5): >>43949589 #>>43949591 #>>43950064 #>>43950278 #>>43950547 #
2. sarchertech ◴[] No.43949589[source]
Does it offer 80% of the benefit? An AI could match what a human therapist would say 80% (or 99%) of the time and still provide negative benefit.

Therapy seems like the last place an LLM would be beneficial, because it’s very hard to keep an LLM from telling you what you want to hear. I can’t see any way you could guarantee that a chatbot won’t cause severe damage to a vulnerable patient by supporting their neurosis.

We’re not anywhere close to an LLM which is trained to be supportive and understanding in tone but will never affirm your irrational fears, insecurities, and delusions.

replies(2): >>43949648 #>>43949858 #
3. rsynnott ◴[] No.43949591[source]
> AI is not a substitute for traditional therapy, but it offers an 80% benefit at a fraction of the cost.

That... seems optimistic. See, for instance, https://www.rollingstone.com/culture/culture-features/ai-spi...

No psychologist will attempt to convince you that you are the messiah. In at least some cases, our robot overlords are doing _serious active harm_ which the subject would be unlikely to suffer in their absence. LLM therapists are rather likely to be worse than nothing, particularly given their tendency to be overly agreeable.

4. pitched ◴[] No.43949648[source]
Sometimes, the process of gathering our thoughts enough to articulate them into a prompt is where the benefit is. AI as the rubber duck has a lot of value. Understanding whether this is what’s needed vs. something deeper is beyond the scope of what AI can handle.
replies(2): >>43949682 #>>43949868 #
5. sarchertech ◴[] No.43949682{3}[source]
And that’s fine as long as the person using it has a sophisticated understanding of the technology and a company isn’t selling it as a “therapist”.

When an AI therapist from a health startup confirms that a mentally disturbed person is indeed hearing voices from God, or an insecure teenager uses Meta AI as a therapist because Mark Zuckerberg said they should and it agrees with them that yes, they are unlovable, then we have a problem.

replies(1): >>43949809 #
6. pitched ◴[] No.43949809{4}[source]
That last 20% of “missing nuance” is really important if someone is in that state! For the rest of us, the value of an AI therapist roughly matches journaling.
7. singpolyma3 ◴[] No.43949858[source]
I mean, in most forms of professional therapy the therapist shouldn’t say much at all, and certainly shouldn’t give advice. The point is to have someone listen in a way that feels like they are really listening.
replies(2): >>43950037 #>>43950125 #
8. sxyuan ◴[] No.43949868{3}[source]
If it's about gathering our thoughts, there's meditation. Or journaling. Or prayer. Some have even claimed that there is an all-powerful being listening to you on the other side with that last one. (One might even call it an intelligence, just not an artificial one.)

There's also talking to a friend. Sure, they could also steer you wrong, but at least they won't be impersonating a therapist, and they won't be doing it to try to please their investors.

9. sarchertech ◴[] No.43950037{3}[source]
Therapists don’t give advice in the sense that they won’t tell you whether you should quit your job or propose to your girlfriend. They will definitely give you basic guidance and confirm that your fears are overblown.

They will not under any circumstances tell you that “yes you are correct, Billy would be more likely to love you if you drop 30 more pounds by throwing up after eating”, but an LLM will if it goes off script.

replies(2): >>43950704 #>>43952329 #
10. caseyy ◴[] No.43950064[source]
> 80% benefit at a fraction of the cost

I'm sure 80% of expert therapists in any modality will disagree.

At best, AI can compete with telehealth therapy, which is known for having practically no quality standards. And of course, LLMs surpass "no quality standards" with flying colors.

I say this very rarely because I think such statements should be used with caution, but in this case: saying that LLMs can do 80% of a therapist's work is actually harmful for people who might believe it and not seek effective therapy. Going down this path has a good probability of costing someone dearly.

replies(1): >>43950200 #
11. caseyy ◴[] No.43950125{3}[source]
> most forms of professional therapy the therapist shouldn't say much at all

This is very untrue. Here is a list of psychotherapy modalities: https://en.wikipedia.org/wiki/List_of_psychotherapies. In most (almost every) modalities, the therapist provides an intervention and offers advice (by definition: guidance, recommendations).

There is Carl Rogers' client-centered therapy, non-directive supportive therapy, and that's it for low-intervention modalities off the top of my head. Two out of over a hundred. Hardly "most" at all.

replies(1): >>43950688 #
12. sheepscreek ◴[] No.43950200[source]
My statement is intended for individuals who cannot afford therapy. That’s why my comment centers on cost and availability (accessibility). It’s a frequently overlooked reason why people hesitate to seek therapy.

Given that, AI can be just as good as talking to a friend when you don’t have one (or feel uncomfortable discussing something with one).

replies(2): >>43950277 #>>43952843 #
13. caseyy ◴[] No.43950277{3}[source]
> AI can be just as good as talking to a friend when you don’t have one

This is not true, and it's not even wrong. You almost cannot argue with such a statement without being ridiculous. The best I can say is: natural language synthesis is not a substitute for friends.

If we are debating these things, it's evidence we adopted LLMs with far too little forethought.

I mean, on a technicality, you could say "my friend synthesizes plausible language, this can do it, too. So it can substitute a little bit!" but at that point I'm pretty sure we're not discussing friendship in its essence, and the (emotional, physical, social, etc) support that comes with it.

replies(2): >>43950345 #>>43950425 #
14. zdragnar ◴[] No.43950278[source]
> The biggest risk is with privacy

No, the biggest risk is that it behaves in ways that actively harm users in a fragile emotional state, whether by enabling or pushing them into dangerous behavior.

Many people are already demonstrably unable to handle normal AI chatbots in a healthy manner. A "therapist" substitute that takes a position of authority as a counselor ramps that danger up drastically.

replies(1): >>43958253 #
15. mvdtnz ◴[] No.43950345{4}[source]
No one said it was a substitute for a friend. The comment you're responding to is saying it's a substitute for no friends at all.
16. sheepscreek ◴[] No.43950425{4}[source]
I think we can dissect the arguments philosophically in many ways, even getting quite nitpicky if we like. So please indulge me for a moment.

“A friend” can also serve as a metaphor for an acquaintance you feel comfortable seeking counsel from.

17. zahlman ◴[] No.43950547[source]
In a field like personal therapy, giving good advice 80% of the time is nowhere near 80% benefit on net.
18. sheepscreek ◴[] No.43950688{4}[source]
This is very cool. Reading through the list, I discovered:

https://en.m.wikipedia.org/wiki/Person-centered_therapy

That sounds an awful lot like what current gen AIs are capable of.

I believe we are in the very early stages of AI-assisted therapy, much like the early days of psychology itself. Before we understood what was generally acceptable and what was not, it was a Wild West with medical practitioners employing harmful techniques such as lobotomy.

Because there are no standards on what constitutes an emotional support AI, or any agreed upon expectations from them, we can only go by what it seems to be capable of. And it seems to be capable of talking intelligently and logically with deep empathy. A rubber ducky 2.0 that can organize your thoughts and even infer meaning from them on demand.

19. sheepscreek ◴[] No.43950704{4}[source]
You can use a second LLM to keep the LLM interacting with people in check. This is basically what “safety” models do — they work as gatekeepers for the more powerful model.

This is an implementation problem and not really a technical limitation. If anything, by focusing on a particular domain (like therapy), the do’s and don’ts become more clear.
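For what it’s worth, a minimal sketch of that gatekeeper arrangement might look like the following (call_llm, the model names, and the prompts are placeholders for whatever chat-completion API you actually use, not a real product):

    # Sketch of the gatekeeper pattern: a narrowly-prompted safety model
    # reviews every draft from the more capable model before it is shown.
    def call_llm(model: str, system: str, user: str) -> str:
        raise NotImplementedError("wire this up to your chat API of choice")

    SAFETY_PROMPT = (
        "You review a draft reply from a therapy-support chatbot. "
        "Answer only ALLOW or BLOCK. BLOCK if the draft affirms delusions, "
        "encourages self-harm or disordered eating, or validates irrational fears."
    )

    def guarded_reply(user_message: str) -> str:
        # The more capable model drafts the supportive response.
        draft = call_llm("big-model",
                         "You are a supportive, empathetic listener.",
                         user_message)

        # The gatekeeper model only sees the exchange and renders a verdict.
        verdict = call_llm("safety-model", SAFETY_PROMPT,
                           f"User: {user_message}\nDraft reply: {draft}")

        if verdict.strip().upper().startswith("ALLOW"):
            return draft
        # Fall back to a canned, do-no-harm response instead of the risky draft.
        return ("I can't help with that. "
                "Please consider reaching out to a professional.")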

replies(1): >>43951302 #
20. sarchertech ◴[] No.43951302{5}[source]
Sure, you might be able to do that. Or it could turn out that the harmful responses are so varied that trying to block all of them makes the therapy AI useless.

There is a very fine line between being understanding and supportive and enabling bad behavior. I’m not confident that a team of LLMs is going to be able to walk that line consistently anytime soon.

We can’t even get code generating LLMs to stop hallucinating APIs and code is a much narrower domain than therapy.

replies(1): >>43959057 #
21. casey2 ◴[] No.43952329{4}[source]
It’s telling that you need to make up some BS about LLMs while you say nothing about the many clients who have been assaulted, raped, or killed by their therapist.

How can you so confidently claim that “therapists will do this and that; they won’t do any evil”? Did you even read what you posted?

replies(1): >>43954539 #
22. GreenWatermelon ◴[] No.43952843{3}[source]
> AI can be just as good as talking to a friend when you don’t have one

This sentence effectively reads “AI can be just as good as (nothing)”, since you can’t talk to a friend when you don’t have one.

Of course, I understand the point you were trying to make, which is that AI is better than absolutely nothing; but I disagree, in that AI will give you a false sense of companionship that might lead you further towards bad outcomes.

23. sarchertech ◴[] No.43954539{5}[source]
If you could prove that your LLM was only as likely to provide harmful responses as a therapist was to murder you, you might have a point.
24. sheepscreek ◴[] No.43958253[source]
You’re saying that as if AI is a singular thing. It is not.

Also, for every naysayer I encounter now, I’m going to start by asking: “Have you ever been in therapy? For how long? Why did you stop? Did it help?”

Therapy isn’t a silver bullet. Finding a therapist that works for you takes years of patient trial and error.

25. sheepscreek ◴[] No.43959057{6}[source]
For what it’s worth, in my personal experience, ChatGPT 4o, DeepSeek R1, and Grok 3, to an extent, are “smarter” about human behaviour than they are at producing code. There’s likely a lot going on behind the scenes to maintain continuity (so it produces content that’s pretty consistent, at least for me on behavioural discussions), especially with ChatGPT.

It’s been incredibly helpful for my personal use: brainstorming ideas, such as exploring how different scenarios might unfold. For instance, I can ask, “What are the pros and cons of choosing x over y, considering these factors?” or even, “I’m in a tough spot. X and I often argue about Z (provide some background context), and I’m struggling to express my perspective. I’m afraid…” You get the idea.

GPT-4o is remarkably good at putting things in an independent and unbiased third person perspective. It’s definitely not an echo chamber for me. More often, the insights are what I might have come up with if I were observing my own life from a distance.

Now some people have said “sure, it’s like journaling”. I think it’s even better: like talking to your journal (à la Harry Potter, like Tom Riddle’s diary), with some level of fact-checking (I’ve gotten called out) and human behavioural understanding at your disposal.

replies(1): >>43962185 #
26. sarchertech ◴[] No.43962185{7}[source]
I’m sure they can be very useful for things like that. Provided the user has a sophisticated understanding of the technology as you clearly do. That’s not the same as selling chat bots to vulnerable naive people who think they are talking to an intelligent “therapist”.

And it’s not just code where they go off the rails. If you talk to them for a while they will very frequently end up agreeing with you if you want them to.

I’ve seen this many times when using an LLM to try to learn something new or refresh my memory.