LLMs are decent at providing feedback and validation. I often have few human sources of that, and long stretches without it are bad for my motivation and sense of well-being.
LLMs filling that hole is great if it happens in discrete, intermittent bumps. TFA shows the psychological risks of bingeing on artificial validation.
All things in moderation, especially LLMs.