165 points distalx | 30 comments
1. hy555 ◴[] No.43947151[source]
Throwaway account. My ex-partner was involved in a study which found these things were not OK. They were paid not to publish by an undisclosed party. That's how bad it has got.

Edit: the study compared therapist outcomes to AI outcomes to placebo outcomes. Therapists in this field performed slightly better than placebo, which is pretty terrible. The AI performed much worse than placebo, which is very terrible.

replies(9): >>43947337 #>>43947376 #>>43947393 #>>43948478 #>>43948522 #>>43948984 #>>43949480 #>>43949609 #>>43950587 #
2. cube00 ◴[] No.43947337[source]
The amount of free money sloshing around the AI space is ridiculous at the moment.
3. sorenjan ◴[] No.43947376[source]
What did they use for placebo? Talking to somebody without education, or not talking to anybody at all?
replies(1): >>43947498 #
4. neilv ◴[] No.43947393[source]
Sounds like suppressing research, at the cost of public health/safety.

Some people knew what the tobacco companies were secretly doing, yet they kept quiet, and let countless family tragedies happen.

What are the best channels for people with info to help halt the corruption, this time?

(The channels might be different than usual right now, with much of US federal being disrupted.)

replies(1): >>43947496 #
5. hy555 ◴[] No.43947496[source]
Start digging into psychotherapy research and tearing their papers apart. Then the SPR. The whole thing is corrupt to the core. A lot of papers drive public health policy outside the field, as they're so vague and easy to cite, but the research is only fit for Retraction Watch.
replies(1): >>43947550 #
6. hy555 ◴[] No.43947498[source]
Not talking to anyone at all.
replies(2): >>43947566 #>>43947681 #
7. neilv ◴[] No.43947550{3}[source]
Being paid to suppress research on health/safety is potentially a different problem than, say, a high rate of irreproducible results.

And if the alleged payer is outside the field, this might also be relevant to the public interest in other regards. (For example, if they're trying to suppress this, what else are they trying to do? That matters even if the research itself turns out to be invalid.)

replies(2): >>43947842 #>>43948013 #
8. zargon ◴[] No.43947566{3}[source]
What did they do then? If they didn't do anything, how can it be considered a placebo?
replies(2): >>43947601 #>>43947975 #
9. risyachka ◴[] No.43947601{4}[source]
Does it matter? The point is AI made it worse.
10. trod1234 ◴[] No.43947681{3}[source]
That seems like a very poor control group.
replies(1): >>43947781 #
11. hy555 ◴[] No.43947781{4}[source]
That is one of my concerns.
12. hy555 ◴[] No.43947842{4}[source]
Both are a problem. I should not conflate the two.

I agree. Asking questions which are normal in my own field resulted in stonewalling and obvious distress. The worst part is that this led to the end of what was a good relationship.

replies(1): >>43948006 #
13. phren0logy ◴[] No.43947975{4}[source]
It's called a "waitlist" control group, and it's not intended to represent placebo. Or at least, it shouldn't be billed that way. It's not an ideal study design, but it's common enough that you could use it to compare one therapy to another based on their results vs a waitlist control. Placebo control for psychotherapy is tricky and more expensive, and it can be hard to get the funding to do it properly.
replies(1): >>43948461 #
14. neilv ◴[] No.43948006{5}[source]
If the allegation is true, hopefully your friend speaks up.

If not, you might consider whether you have actionable information yourself, any professional obligations you have (e.g., if you work in science/health/safety yourself), any societal obligations, whether reporting the allegation would be betraying a trust, and what the calculus is there.

15. cjbgkagh ◴[] No.43948013{4}[source]
I figured it would be related in that it's a form of p-hacking. Do 20 studies, one gives you the 'statistically significant' results you want, suppress the other 19. Then 100% of published studies support what you want. Could be combined with p-hacking within the studies to compound the effect.
replies(1): >>43950325 #
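To put rough numbers on that point: with a 0.05 significance threshold and no real effect, the chance that at least one of 20 independent studies comes up "significant" is 1 - 0.95**20, about 64%. A small simulation sketch of that selective-publication effect (illustrative Python; the function name and parameters are mine, not from the thread):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def chance_of_a_false_positive(n_trials=2_000, n_studies=20, n_per_arm=50, alpha=0.05):
    """Estimate how often at least one of n_studies truly-null studies looks 'significant'."""
    hits = 0
    for _ in range(n_trials):
        for _ in range(n_studies):
            control = rng.normal(0.0, 1.0, n_per_arm)
            treatment = rng.normal(0.0, 1.0, n_per_arm)  # same distribution: no real effect
            if ttest_ind(treatment, control).pvalue < alpha:
                hits += 1
                break  # publish the one "positive" study, shelve the rest
    return hits / n_trials

print(chance_of_a_false_positive())  # roughly 0.64, matching 1 - 0.95**20
```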
16. ◴[] No.43948461{5}[source]
17. scotty79 ◴[] No.43948478[source]
I've heard of some more recent research with LLMs that found the AI therapist was straight up better than human therapists across all measures.
18. ilaksh ◴[] No.43948522[source]
Which model exactly? What type of therapy/prompt? Was it a completely dated model, like in the article where they talk about a model from two years ago? We have had massive progress in two years.
replies(1): >>43948862 #
19. raverbashing ◴[] No.43948862[source]
Honestly, none of the companies are tuning their models to be better at therapy.

Also, the training material for the model probably doesn't deal with the actual practical aspects of therapy; only some of the theoretical aspects are likely in there.

replies(2): >>43949136 #>>43949482 #
20. ◴[] No.43948984[source]
21. ilaksh ◴[] No.43949136{3}[source]
The leading-edge models are trainable via instructions; that's why agents are possible. Many online therapy companies are training or instructing their agents in this domain.
replies(1): >>43952313 #
22. ◴[] No.43949480[source]
23. jdietrich ◴[] No.43949482{3}[source]
>none of the companies are tuning their model to be better at therapy

BrickLabs have developed an expert-fine-tuned model specifically to provide psychotherapy. Their model has shown modestly positive results in a reasonably large preregistered RCT.

https://trytherabot.com/

https://ai.nejm.org/doi/full/10.1056/AIoa2400802

replies(1): >>43952032 #
24. rsynnott ◴[] No.43949609[source]
I'm quite curious how the placebo in a study like this works.
replies(1): >>43952940 #
25. genewitch ◴[] No.43950325{5}[source]
97% of all scientists named Steve agree that global warming is happening!
26. twobitshifter ◴[] No.43950587[source]
They should do ELIZA as the control or at least include it to see how far we have or haven’t advanced.
replies(1): >>43957698 #
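For context, ELIZA-style "therapy" amounts to keyword matching plus pronoun reflection. A minimal sketch of the idea (illustrative Python, not Weizenbaum's original 1966 script):

```python
import random
import re

# Reflect pronouns so "I feel stuck in my job" becomes "you feel stuck in your job"
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}

# (pattern, canned responses) pairs, tried in order; {0} is the reflected capture
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
    (r".*", ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return random.choice(responses).format(*reflected)
    return "Please go on."

print(respond("I feel stuck in my job"))  # e.g. "Why do you feel stuck in your job?"
```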
27. raverbashing ◴[] No.43952032{4}[source]
Yeah, but 99% of people trying "AI mental health" are using free ChatGPT, etc.
28. ktallett ◴[] No.43952313{4}[source]
That still wouldn't cover the edge cases and unusual situations which, having experienced group therapy for many years, I would say most significant therapy users have quite a few of.
29. derbOac ◴[] No.43952940[source]
Usually in psychotherapy studies, the control is one of the following:

- waitlist control, where people get nothing

- psychoeducational control, where people get some kind of educational content about mental health but not therapy

- existing non-psychological service, like physical checkups with a nurse

- existing therapy, so not placebo but current treatment

- pharmacological placebo, where people are given a placebo pill and told it's psychiatric medication for their concern

- a kind of "nerfed" version of the therapy, such as supportive therapy where the clinician just provides empathy etc. but nothing else

How to interpret results depends on the control.

It's relevant to debates about general effects in therapy (rapport, empathy, fit) versus specific effects (effects due to the particular techniques of a particular therapy).

Bruce Wampold has written a lot about types of controls, although he has a hard nonspecific/general-effects take on therapy.

30. kbelder ◴[] No.43957698[source]
Or a normal person with no training in therapy?