The problem with AI-generated code is that there’s no unifying theory behind the change. When a human writes code and one part of the system looks weird or different, there’s usually a reason - by digging into the underlying system, I can usually figure out what they were going for or why they did something a particular way. And I can only offer useful feedback or alternatives once I’ve figured out why something was done.
LLM-generated code has no unifying theory behind it - every line may as well have been written by a different person, so you get an utterly insane-looking codebase with no common thread tying it together and no reason behind any of it. It’s like trying to figure out what the fuck is happening in a legacy codebase, except it’s brand new. I’ve wasted hours trying to understand someone’s MR, only to realize it’s vibe code and there’s no reason for any of it.