Almost every summary I have read contains at least one glaring mistake, but if it were a topic I knew nothing about, I can see how easy it would be to just trust it, since 95% of it seems accurate.
"Trust, but verify" is all the more relevant today. Except I would discount the trust entirely.
In the programming domain I can at least run something and see that it doesn't compile or doesn't behave as I expect, but there's no way to verify that a written statement about someone or something is the correct interpretation without knowing the correct answer ahead of time. To muddy the waters further, these models work just well enough on common knowledge that it's easy to believe they could be right about uncommon knowledge you don't know how to verify (or else you wouldn't be asking in the first place).
With the same code from an intern or junior programmer, you can at least walk through their reasoning in a code review. Even better if they learn and don't make the same mistake again. LLMs will happily screw up randomly on every repeated prompt.
The hardest code you encounter is code written by someone else. You don't have the same mental model or memories as the original author, so you have to build all that context before you can reason through the code. If an LLM is writing a lot of your code, you're missing out on all the context you'd normally build by writing it yourself.