
165 points by distalx | 1 comment
ilaksh[dead post] ◴[] No.43948635[source]
[flagged]
simplyinfinity ◴[] No.43948902[source]
Even today, leading LLMs like Claude 3.7 and ChatGPT 4 take your questions as "you've made a mistake, fix it" instead of answering them. People consider a much broader context of the situation, including your body language and facial expressions, can come up with unusual solutions to specific situations, and can explore vastly more things than an LLM.

And the thing when it comes to therapy is that a real therapist doesn't have to be prompted and can adjust to you automatically, without your explicit say-so. They're not overly affirming, they can stop you from doing things, and they can say no to you. LLMs are the opposite of that.

Also, as a layperson, how do I know the right prompts for <llm of the week> to work correctly?

Don't get me wrong, I would love for AI to be on par with or better than a real-life therapist, but we're not there yet, and I would advise everyone against using AI for therapy.

1. sho_hn ◴[] No.43949214{3}[source]
Even if the tech were there, for appropriate medical use those models would also have to be strenuously tested and certified, so that a known-good version is in use. Cf. the recent "personality" changes in a ChatGPT upgrade. Right now, none of these tools is regulated sufficiently to set safe standards there.