> One of the consequences of this is that we should always consider asking the LLM the same question more than once, perhaps with some variation in the wording. Then we can compare the answers, and perhaps even ask the LLM to compare them for us. The differences between the answers can be as useful as the answers themselves.
This is essentially what LLM "reasoning" does. Rather than "reasoning" in the human sense, it largely reduces the variance introduced by variations in the prompt and by the randomness of next-token prediction.
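As a minimal sketch of this idea, the snippet below asks the same underlying question several times with small wording variations, then has the model compare the answers. The `ask_llm` helper and the example question variants are hypothetical placeholders, not a specific provider's API; wire `ask_llm` to whatever chat-completion call you actually use, sampling with a non-zero temperature so the answers can actually vary.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for your actual LLM call
    (e.g. a chat-completion request with temperature > 0)."""
    raise NotImplementedError("wire this up to your LLM provider")


# Hypothetical rewordings of the same underlying question.
QUESTION_VARIANTS = [
    "What are the main causes of X?",
    "Briefly, what causes X?",
    "Explain the primary reasons behind X.",
]


def sample_answers(variants: list[str], n_per_variant: int = 2) -> list[str]:
    """Ask each wording of the question several times and collect the answers."""
    answers = []
    for prompt in variants:
        for _ in range(n_per_variant):
            answers.append(ask_llm(prompt))
    return answers


def compare_answers(answers: list[str]) -> str:
    """Have the LLM itself point out where the answers agree and disagree."""
    numbered = "\n\n".join(f"Answer {i + 1}:\n{a}" for i, a in enumerate(answers))
    return ask_llm(
        "Here are several answers to the same underlying question:\n\n"
        f"{numbered}\n\n"
        "Where do they agree, where do they differ, and which differences matter?"
    )
```

The comparison step is where the value lies: broad agreement across rewordings raises confidence, while divergence flags exactly the parts of the answer the model is least sure about.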