
443 points jaredwiener | 1 comment
lvl155 No.45032596
Clearly ChatGPT should not be used for this purpose, but I will say this industry (counseling) is also deeply flawed. It is also mis-incentivized in many parts of the world. And if ChatGPT is basing its interactions on the same scripted content these “professionals” use, that’s just not right.

I really wish people in the AI space would stop the nonsense and communicate more clearly what these LLMs are designed to do. They’re not some magical AGI. They’re token prediction machines. That’s literally how they should frame it so the general population knows exactly what they’re getting.
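For anyone unfamiliar with what “token prediction” means concretely, here is a minimal sketch of the single step an LLM repeats at inference time: turn a vector of raw scores (logits) into a probability distribution and sample the next token. The vocabulary and numbers below are invented purely for illustration; no real model or product is being quoted.

    import numpy as np

    def sample_next_token(logits, temperature=1.0):
        # Softmax: convert raw scores into probabilities,
        # subtracting the max first for numerical stability.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # Draw one token id from that distribution.
        return np.random.choice(len(probs), p=probs)

    # Toy vocabulary and made-up logits for some prompt prefix.
    vocab = ["sad", "fine", "great", "hopeless"]
    logits = np.array([2.1, 1.3, 0.4, 1.9])
    print(vocab[sample_next_token(logits)])

Everything an LLM outputs, including text that reads as empathy or advice, comes from iterating that one sampling step.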

replies(5): >>45032792 >>45032867 >>45036104 >>45037595 >>45041668
1. ascorbic No.45036104
I'm not sure how you can blame counselors when no counselor would have said any of the things that were a problem here. The issue wasn't that there were examples in the training data of counselors giving practical instructions on suicide – the problem was the well-known tendency for LLMs to lose their guardrails too easily and revert to RLHF-derived people-pleasing, particularly in long conversations.