If all other things are equal and one LLM is consistently, vaguely annoying, for whatever reason, and the other isn't, I choose the other one.
Leaving myself aside, LLMs are broadly available and strongly pushed onto everyone for day-to-day use, including vulnerable and insecure groups. These groups should not have to adapt to the tool; the tool should adapt to its users.
I'm not GP, but I agree that it isn't universal, nor especially healthy or productive, to react the way you describe when told that your issue is common. That reaction would make sense if you could, say, hear the insincerity in a person's tone of voice, but Gemini outputs text, and the concept of sincerity is irrelevant to a computer program.
Focusing on the informational content seems like a good idea to me, so as to avoid the https://en.wikipedia.org/wiki/ELIZA_effect.
> it's also weird that the state of my own mental resilience should play any role at all when interacting with a tool.
When I was a university student, my own mental resilience was absolutely instrumental to deciphering gcc error messages.
> LLMs are broadly available and strongly forced onto everyone for day-to-day use
They say this kind of thing about cars and smartphones, too. Somehow I endure.