I've been using LLMs almost every day for the past year. They're definitely helpful for small tasks, but on real, complex projects, reviewing and fixing their output can sometimes take longer than writing the code myself.
We probably need a bit less wishful thinking. Blindly trusting what the model suggests tends to backfire. The real challenge is figuring out where it actually helps, and where it quietly gets in the way.