And the thing about therapy is, a real therapist doesn't have to be prompted and can adjust to you automatically, without your explicit say-so. They're not overly affirming, they can stop you from doing things, and they can tell you no. LLMs are the opposite of that.
Also, as a layperson, how do I know the right prompts to make <llm of the week> work correctly?
Don't get me wrong, I would love for AI to be on par with or better than a real-life therapist, but we're not there yet, and I would advise everyone against using AI for therapy.
I am very experienced with creating prompts and agents, and good at research, and I believe that my agent, along with the journaling tool, would be more effective than many "average" human therapists.
It seems effective in dealing with my own issues.
Obviously I am biased.
> research aided by my agent

Also not good enough.
As an example: yesterday I asked Claude and ChatGPT to design a circuit that monitors pulses from an S0 power-meter interface. The result was a circuit with no external power supply at all. When I pointed this out, it said "ah yes, let me add that" and proceeded to confuse itself, adding components that aren't needed but come with explanations that sound reasonable if you don't know better. After numerous attempts it never produced a working design.
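For scale, a working solution here is tiny. Below is a minimal sketch of the software half, assuming a Raspberry Pi with the meter's S0+ wired to BCM pin 17 and S0- to GND, and a meter whose passive open-collector output tolerates the Pi's 3.3 V internal pull-up (the S0 spec, DIN 43864, nominally uses an external loop supply, which is exactly the part the models kept forgetting). The pin number and pulse rate are placeholders.

```python
import time
import RPi.GPIO as GPIO

S0_PIN = 17            # BCM numbering; hypothetical wiring
PULSES_PER_KWH = 1000  # pulse rate printed on the meter's faceplate

count = 0

def on_pulse(channel):
    """The meter closes its open-collector output for tens of ms
    per pulse, pulling the line low against the pull-up."""
    global count
    count += 1

GPIO.setmode(GPIO.BCM)
# The internal pull-up is what powers the loop; this is the detail
# missing from the generated designs.
GPIO.setup(S0_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(S0_PIN, GPIO.FALLING, callback=on_pulse, bouncetime=20)

try:
    while True:
        time.sleep(60)
        print(f"{count} pulses = {count / PULSES_PER_KWH:.3f} kWh")
except KeyboardInterrupt:
    GPIO.cleanup()
```

The hardware half is comparably small: an optocoupler and a resistor or two if you want galvanic isolation, or just the pull-up above if you don't.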
So how can you verify that the therapist agent you've built will work on something as complex as a human, when these models can't even manage basic circuit design governed by known laws of physics, with specs and datasheets for no more than 10 components?