LLMs augment your input with their training data. They don't inherently agree with you if you set up the context correctly for analysis.
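For what it's worth, here's a minimal sketch of what I mean by setting up the context for analysis rather than agreement, using the OpenAI Python client (the model name and prompt wording are just placeholders, not a recommendation):

    # Sketch: ask for critique up front instead of agreement.
    from openai import OpenAI

    client = OpenAI()

    messages = [
        {
            "role": "system",
            # Frame the task as analysis, not validation.
            "content": (
                "You are reviewing a proposal. Do not agree by default: "
                "list the strongest counterarguments, the assumptions that "
                "could be wrong, and what evidence would change the conclusion."
            ),
        },
        {"role": "user", "content": "Here is the proposal: ..."},
    ]

    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(response.choices[0].message.content)

The point isn't the specific wording; it's that the framing you put in context determines whether you get validation or analysis back.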
I've come to the conclusion that a top-down push without adequate upskilling creates bad experiences and subpar results. It's like adopting a new methodology without actually training anyone on it: everyone is left scrambling to figure it out, often with poor results.
I find LLMs to be a great multiplier, but that multiplier works on whatever you put in context. If you put in bias and/or a fragmented mess, it's far more difficult to steer the context back to correct it than it was to add it in the first place.