I’d disagree, though: humans are typically still easier to predict and understand (and trust) than AI.
In this example, GPT-4o cannot tell that GitHub is spelled correctly:
https://app.gitsense.com/?doc=6c9bada92&model=GPT-4o&samples...
In this example, Claude cannot tell that GitHub is spelled correctly:
https://app.gitsense.com/?doc=905f4a9af74c25f&model=Claude+3...
I still believe LLMs are a game changer, and I'm currently working on what I call a "Yes/No" tool, which I believe will make trusting LLMs a lot easier (for certain things, of course). The basic idea is that the "Yes/No" tool lets you combine models, samples, and prompts to arrive at a Yes or No answer.
Based on what I've seen so far, any single model can easily screw up, but it is unlikely that all of them will screw up at the same time.
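To make that concrete, here's a minimal sketch of the voting idea as I picture it (ask() is a hypothetical placeholder for whatever client call queries a given model; the real tool's internals will differ):

    from collections import Counter

    def ask(model: str, prompt: str) -> str:
        """Hypothetical stand-in: send `prompt` to `model`, get "Yes" or "No" back."""
        raise NotImplementedError("wire up your model client here")

    def yes_no(models: list[str], prompt: str, samples: int = 3) -> str:
        """Poll each model `samples` times and return the majority answer.

        Premise: one model can easily screw up, but it's unlikely
        that all of them screw up on the same question at once.
        """
        votes = Counter(
            ask(model, prompt)
            for model in models
            for _ in range(samples)
        )
        answer, count = votes.most_common(1)[0]
        # Only trust the result if a clear majority of all samples agree.
        return answer if count / sum(votes.values()) > 0.5 else "Undecided"

    # e.g. yes_no(["gpt-4o", "claude-3"], 'Is "GitHub" spelled correctly? Answer Yes or No.')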
But we have had extensive experience with humans, so it's normal for that trust to be better defined; LLMs will become better understood as well. There is no central understander or single source of truth, and that is the interesting part: it's a "Blind men and the elephant" situation.