It's basically the wetware equivalent of page thrashing.
My experience is that I write better code faster by turning off the AI assistants and configuring the IDE so that its suggestions are as deterministic and fast as possible, so they become a rapid shorthand. That lets me write code quickly without mental model thrashing, since the model can be updated incrementally as I go.
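As a rough sketch of what I mean (assuming VS Code with Copilot-style inline completions as the source of the thrashing; the exact keys will differ in other IDEs), settings along these lines keep only the deterministic language-server completions:

    // settings.json (JSONC): favor fast, predictable completions over AI suggestions
    {
      // turn off ghost-text / AI inline suggestions
      "editor.inlineSuggest.enabled": false,
      "github.copilot.enable": { "*": false },

      // keep classic IntelliSense, but make it snappy and predictable
      "editor.quickSuggestions": { "other": true, "comments": false, "strings": false },
      "editor.quickSuggestionsDelay": 0,
      "editor.suggestSelection": "first",
      "editor.suggest.localityBonus": true
    }

The point isn't these exact values, just that the completion popup should behave the same way every time so it can be driven from muscle memory.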
The exception is using LLMs to straight up generate a prototype that I then refine. That also works pretty well, and it largely avoids the expensive back-and-forth exchange of information between human and machine.