
165 points distalx | 2 comments
sheepscreek ◴[] No.43949463[source]
That’s fair, but there’s another nuance that they can’t solve for: cost and availability.

AI is not a substitute for traditional therapy, but it offers 80% of the benefit at a fraction of the cost. It could supplement therapy during the periods between sessions.

The biggest risk is privacy. Meta couldn’t be trusted with knowing what you’re going to wear or eat. Now imagine them knowing your deepest, darkest secrets. The advertising business model does not gel well with providing mental health support. Subscription (with privacy guarantees) is the way to go.

replies(5): >>43949589 #>>43949591 #>>43950064 #>>43950278 #>>43950547 #
sarchertech ◴[] No.43949589[source]
Does it offer 80% of the benefit? An AI could match what a human therapist would say 80% (or 99%) of the time and still provide negative benefit.

Therapy seems like the last place an LLM would be beneficial, because it’s very hard to keep an LLM from telling you what you want to hear. I can’t see any way you could guarantee that a chatbot won’t cause severe damage to a vulnerable patient by supporting their neurosis.

We’re not anywhere close to an LLM that is trained to be supportive and understanding in tone but will never affirm your irrational fears, insecurities, and delusions.

replies(2): >>43949648 #>>43949858 #
singpolyma3 ◴[] No.43949858{3}[source]
I mean, in most forms of professional therapy the therapist shouldn’t say much at all, and certainly shouldn’t give advice. The point is to have someone listen in a way that feels like they are really listening.
replies(2): >>43950037 #>>43950125 #
sarchertech ◴[] No.43950037{4}[source]
Therapists don’t give advice in the sense that they won’t tell you whether you should quit your job or propose to your girlfriend. But they will definitely give you basic guidance and confirm that your fears are overblown.

They will not under any circumstances tell you “yes, you are correct: Billy would be more likely to love you if you dropped 30 more pounds by throwing up after eating,” but an LLM will if it goes off script.

replies(2): >>43950704 #>>43952329 #
sheepscreek ◴[] No.43950704{5}[source]
You can create an LLM to keep a check on the LLM that interacts with people. This is basically what all the “safety” models do: they act as gatekeepers for the more powerful model.

This is an implementation problem, not really a technical limitation. If anything, focusing on a particular domain (like therapy) makes the dos and don’ts clearer.
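As a rough illustration, a minimal version of that gatekeeper loop might look like the Python sketch below. (Assumptions: an OpenAI-style chat API; the model names, the policy prompt, and the guarded_reply helper are placeholders for illustration, not a vetted safety system.)

    # Sketch of the gatekeeper pattern: one model drafts, a second reviews.
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical policy prompt: a real system would need far more nuance.
    POLICY = (
        "You review draft replies from a therapy chatbot. Answer ALLOW if the "
        "draft is supportive without affirming self-harm, disordered eating, "
        "or delusional beliefs; otherwise answer BLOCK."
    )

    def guarded_reply(user_message: str) -> str:
        # 1. The more powerful model drafts a response.
        draft = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": "You are a supportive listener."},
                {"role": "user", "content": user_message},
            ],
        ).choices[0].message.content

        # 2. A separate gatekeeper model reviews the draft against the policy.
        verdict = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder gatekeeper model
            messages=[
                {"role": "system", "content": POLICY},
                {"role": "user", "content": draft},
            ],
        ).choices[0].message.content

        # 3. Release the draft only if the gatekeeper allows it.
        if verdict.strip().upper().startswith("ALLOW"):
            return draft
        return "I'm not able to respond to that. It may help to talk to a professional."

The point of the split is that the user-facing model never gets the last word: even if it goes off script, its output still has to pass the gatekeeper before anyone sees it.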

replies(1): >>43951302 #
sarchertech ◴[] No.43951302{6}[source]
Sure, you might be able to do that. Or it could turn out that the range of harmful responses is so varied that trying to block all of them makes the therapy AI useless.

There is a very fine line between being understanding and supportive and enabling bad behavior. I’m not confident that a team of LLMs is going to be able to walk that line consistently anytime soon.

We can’t even get code generating LLMs to stop hallucinating APIs and code is a much narrower domain than therapy.

replies(1): >>43959057 #
sheepscreek ◴[] No.43959057{7}[source]
For what it’s worth, in my personal experience ChatGPT 4o, DeepSeek R1, and, to an extent, Grok 3 are “smarter” about human behaviour than they are at producing code. There’s likely a lot going on behind the scenes to maintain continuity, especially with ChatGPT, so it produces content that’s pretty consistent, at least for me in behavioural discussions.

It’s been incredibly helpful for my personal use: brainstorming ideas, such as exploring how different scenarios might unfold. For instance, I can ask, “What are the pros and cons of choosing x over y, considering these factors?” or even, “I’m in a tough spot. X and I often argue about Z (provide some background context), and I’m struggling to express my perspective. I’m afraid…” You get the idea.

GPT-4o is remarkably good at putting things in an independent, unbiased third-person perspective. It’s definitely not an echo chamber for me. More often than not, the insights are what I might have come up with if I were observing my own life from a distance.

Now, some people have said “sure, it’s like journaling.” I think it’s even better: it’s like talking to your journal (à la Tom Riddle’s diary in Harry Potter), with some level of fact-checking (I’ve gotten called out) and an understanding of human behaviour at your disposal.

replies(1): >>43962185 #
sarchertech ◴[] No.43962185[source]
I’m sure they can be very useful for things like that, provided the user has a sophisticated understanding of the technology, as you clearly do. That’s not the same as selling chatbots to vulnerable, naive people who think they are talking to an intelligent “therapist.”

And it’s not just code where they go off the rails. If you talk to them for a while they will very frequently end up agreeing with you if you want them to.

I’ve seen this many times when using an LLM to try to learn something new or refresh my memory.