AI should not be a goal in itself, unless you make and sell AI. Everyone else should stick to their original quality and productivity metrics. If AI can help you improve those, great. But don't make AI use itself a goal.
I've got a coworker who complains she's being pressured by management to use AI to write her documents. She already uses AI to review them, and that works great, according to her. But they want her to use AI to write the whole thing, and she refuses, because the writing process is also how she organizes her own thinking about the content. If she handed that off, she wouldn't be building her own mental model of the processes she's describing, and soon she'd have no idea what's going on anymore.
People often underestimate the importance of such mental models. I recall a story about an air traffic control system that was automated, which caused controllers to lose track in their heads of which plane was where. So the system was changed so that they still had to manually hand planes off from one zone to another in an otherwise automated system, just to keep their mental models intact.
It's also how I use AI: to summarize or rewrite text so it reads better, but never to create or understand code. Nothing that requires deep understanding of the problem space.
It's the mental models in my head, which don't mesh with AI, that prevent AI adoption for me.