Interacting with an LLM (especially one running locally) can do something a therapist cannot: provide an honest interaction outside the capitalist framework. The AI has its limitations, but it is an entity just being itself, doing the best it can, without expecting anything in return.
I think operating under the assumption that AI is an entity being itself, and comparing it to dogs, is not really accurate. Entities (not in the legal sense, but in the general sense) are beings, living beings capable of emotion, thought, and will, are they not? Whether dogs qualify could be up for debate (I think they do, personally), but language models simply do not.
The very notion that they could be any type of entity is directly tied to the valuation of the companies that created them; it is part of the hype and of the capitalist system, and I, again personally, don't think anyone could ever turn that into something that somehow ends up opposing capitalism just because the AI can't directly want something from you in return.
I understand the sentiment and the distrust of the mental health care apparatus: it is expensive, it is tied to capitalism, and it depends on trusting someone who is being paid to influence your life in a very personal way. But that is still better than trusting the judgment of a conversational simulation that has none, one incapable of knowing you, of observing you (not just what you write, but how you physically react to situations or to the retelling, like tapping your foot or disengaging), and of understanding nuance. Most people would be better served talking to friends (or doing their best to make friends they can trust if they don't have any), and I would argue that people supporting people who are struggling is one way of truly opposing capitalism.