On a similar note, has anyone found themselves absolutely not trusting non-code LLM output?
The code is at least testable and verifiable. For everything else I'm left wondering whether it's the truth or a hallucination, which adds exactly the mental burden I was trying to avoid by using an LLM in the first place.
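To illustrate what I mean by "testable": if the LLM writes a small helper, I can pin its behavior down with a unit test and stop worrying. (A minimal sketch in Python; slugify and its expected behavior are hypothetical examples, not anything a model actually gave me.)

    # Sketch: verifying an LLM-written helper with a unit test.
    import re
    import unittest

    def slugify(text: str) -> str:
        # Hypothetical LLM-generated helper: lowercase, trim, and
        # collapse runs of non-alphanumerics into single hyphens.
        text = text.strip().lower()
        text = re.sub(r"[^a-z0-9]+", "-", text)
        return text.strip("-")

    class TestSlugify(unittest.TestCase):
        def test_basic(self):
            self.assertEqual(slugify("Hello, World!"), "hello-world")

        def test_collapses_separators(self):
            self.assertEqual(slugify("  a -- b  "), "a-b")

    if __name__ == "__main__":
        unittest.main()

There's no equivalent check for a paragraph of prose claims; that's the asymmetry I'm getting at.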
replies(7):