> And if you believe nothing ChatGPT says then you simply suffer from the opposite bias.
No, you're making a logical error. It's not that I believe the opposite of what ChatGPT says; it's that I don't believe what ChatGPT says based solely on ChatGPT having said it. If ChatGPT says your algorithm is right, I don't automatically believe your algorithm is wrong; I believe I still don't know it's right, because ChatGPT is shit at being right.
> The dilemma is that it is mostly right, most of the time, isn't it? Almost like... people?
I mean, yeah. Which is why I don't believe things just because some random person says them, either.
I tend to assign confidence levels to beliefs in my head, and even an expert saying something doesn't make me super confident in it. I believe things strongly when I can understand the hard evidence for them, not because any person, even an expert, says so.
With security, we have tens of thousands of examples of unproven security assertions being proven wrong later, with disastrous effect, so the standard of proof is very high, and ChatGPT saying it is absolutely nowhere near enough. The only acceptable proof for a security algorithm is a mathematical proof.
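To make the failure mode concrete, here is the classic "naive shuffle": it looks right at a glance, an LLM will happily bless it, and it is provably non-uniform. (Python purely for illustration; this is a textbook example, not a claim about your code.)

```python
import random
from collections import Counter

def naive_shuffle(items):
    """Swap every position with a uniformly random position.

    Looks plausible, but it is provably biased: n positions times n
    choices gives n**n equally likely execution paths, and for n >= 3
    n**n is not divisible by n!, so the n! permutations cannot all be
    equally likely (27 paths vs 6 permutations for n = 3).
    """
    a = list(items)
    n = len(a)
    for i in range(n):
        j = random.randrange(n)        # bug: should draw from i..n-1
        a[i], a[j] = a[j], a[i]
    return tuple(a)

# Expose the bias empirically on a 3-element list.
counts = Counter(naive_shuffle("abc") for _ in range(600_000))
for perm, c in sorted(counts.items()):
    # Uniform would be ~100,000 each; instead three permutations land
    # near ~111,000 and three near ~89,000.
    print("".join(perm), c)
```

The bias is invisible to an eyeball review and only falls out of actually counting paths against permutations, which is exactly the kind of thing "ChatGPT says it's fine" will never catch.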
> If ChatGPT says things that are empirically provable, then it's not wrong, is it?
ChatGPT also says things that are empirically disprovable, so you always have to go to the empirical proof anyway to tell whether what ChatGPT said is true, which makes ChatGPT next-to-useless if accuracy is at all a concern. Just go to the proof and skip the pointless ChatGPT step.
And accuracy really, really is a big concern for secure random shuffling algorithms, so you really do need to prove that yours works, which you have not done, because you stopped thinking the moment you trusted ChatGPT. Your shuffling algorithm is not proven, and very likely is not secure.
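For contrast, the construction whose uniformity actually has a short proof is Fisher-Yates (Durstenfeld variant) driven by an unbiased draw from a CSPRNG. I haven't seen your algorithm, so this is just the textbook baseline it would need to match, sketched with Python's standard-library `secrets` module:

```python
import secrets

def fisher_yates(items):
    """Durstenfeld/Fisher-Yates shuffle.

    The uniformity proof is a short induction: at step i the element
    that ends up at position i is chosen uniformly from the i + 1
    candidates still eligible for it, so every one of the n! orderings
    has probability exactly 1/n!. secrets.randbelow() draws from the OS
    CSPRNG and avoids modulo bias, so the index really is uniform.
    """
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = secrets.randbelow(i + 1)   # uniform over 0..i inclusive
        a[i], a[j] = a[j], a[i]
    return a

print(fisher_yates(list(range(10))))
```

If your algorithm reduces to something like this, write that proof down; if it doesn't, "ChatGPT said it's right" is not a substitute for producing one.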