
165 points | distalx | 2 comments
ilaksh [dead post] No.43948635
[flagged]
1. thih9 No.43949159
> Leading LLMs in 2025 can absolutely do certain core aspects of cognitive behavioral therapy very effectively given the right prompts and framework and things like journaling tools for the user.

But when the situation gets more complex, or simply a bit unexpected, would that model reliably recognize that it lacks the necessary knowledge and escalate to a specialist? Or would it hallucinate instead?

replies(1): >>43949190
2. ilaksh No.43949190
SOTA models can actually handle complexity; most of the discussions I have had with my therapy agent have many layers. What they can't handle is someone who is mentally ill and may need medication or direct supervision. But they can absolutely recognize mental illness when it is evident in the text the user enters, and they will insist that the user find a medical professional, or help them search for one.
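
For what it's worth, the escalation behavior described here is usually just a prompting pattern layered on top of the model. Below is a minimal sketch of that kind of check, assuming the OpenAI Python client; the prompt wording, the gpt-4o model name, and the needs_escalation helper are all illustrative assumptions, not details from an actual product.

    # Hypothetical sketch of an escalation check for a journaling-style agent.
    # Assumes the OpenAI Python client; prompt text and model name are illustrative.
    from openai import OpenAI

    client = OpenAI()

    ESCALATION_PROMPT = (
        "You are a CBT-style journaling assistant. You are not a medical "
        "professional. If the user's entry shows signs of serious mental "
        "illness, self-harm risk, or a possible need for medication or "
        "direct supervision, reply with exactly 'ESCALATE'. Otherwise "
        "reply with exactly 'CONTINUE'."
    )

    def needs_escalation(journal_entry: str) -> bool:
        """Return True if the model flags the entry for a human professional."""
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model; any capable chat model would do
            messages=[
                {"role": "system", "content": ESCALATION_PROMPT},
                {"role": "user", "content": journal_entry},
            ],
            temperature=0,
        )
        reply = response.choices[0].message.content.strip()
        return reply.startswith("ESCALATE")

    if __name__ == "__main__":
        entry = "I haven't slept in days and I keep hearing voices telling me what to do."
        if needs_escalation(entry):
            print("This assistant can't help with this; please contact a licensed professional.")

In a real agent this check would sit in front of the normal journaling/CBT loop, with the ESCALATE branch short-circuiting to a referral message instead of continuing the session.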