We're talking about software development here, not misinformation about politics or something.
Software is incredibly easy to verify compared to most other domains. First, my own expertise catches most mistakes during review. Second, the layers of automated linting, unit testing, integration testing, and manual testing are all but guaranteed to catch functionality that is actually wrong.
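As one example of that safety net, even a single existing unit test will flag AI-generated code that quietly does the wrong thing (a pytest-style sketch; `slugify_title` and its module are hypothetical, just stand-ins for whatever your suite already covers):

```python
# test_slugs.py — part of the existing suite, run on every change
from myapp.text import slugify_title

def test_slugify_title_lowercases_and_hyphenates():
    # If an AI-generated rewrite of slugify_title mangles this behaviour,
    # the assertion fails on the very next test run.
    assert slugify_title("Hello World") == "hello-world"
```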
So, how exactly do you think AI is going to trick me when I ask it to write a new migration to add a table, link that into a model, and expose it in an API? I have done each of these things a hundred times. When it makes a mistake it is immediately obvious, because the process is so routine. The idea that it could trick me here is absurd.
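To make that concrete, here is roughly the shape of that routine change (a minimal Django / Django REST Framework sketch; the `Widget` model, field names, and file layout are made up for illustration):

```python
# models.py — the new table (hypothetical "Widget" example)
from django.db import models

class Widget(models.Model):
    name = models.CharField(max_length=100)
    created_at = models.DateTimeField(auto_now_add=True)


# serializers.py — how the model is exposed over the API
from rest_framework import serializers

class WidgetSerializer(serializers.ModelSerializer):
    class Meta:
        model = Widget
        fields = ["id", "name", "created_at"]


# views.py — the endpoint itself
from rest_framework import viewsets

class WidgetViewSet(viewsets.ModelViewSet):
    queryset = Widget.objects.all()
    serializer_class = WidgetSerializer
```

The migration itself is generated with `python manage.py makemigrations`, so a mistake in the model definition surfaces the moment the migration is created or applied.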
AI does carry the risk of lulling people into a false sense of security. But that is mostly a concern when you ask it to explain how a codebase works, or to teach you about a technology, where you can come away with a false idea of how something works. But in software development itself? With tools I have already worked with for years? It just isn't a big issue. The benefits far outweigh it occasionally telling me that an API exists when it doesn't, which I realise almost immediately when the code fails to run.
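That kind of hallucination tends to blow up on the first run. For instance (an invented call, purely for illustration; the real `requests` library has no such helper):

```python
import requests

# A plausible-looking call an LLM might invent — requests has no get_json().
data = requests.get_json("https://api.example.com/widgets")
# AttributeError: module 'requests' has no attribute 'get_json'
# The failure shows up the first time the code runs, long before it ships.
```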
People who dismiss AI because it makes mistakes are tiresome. The unreliability of LLMs is just another constraint to engineer around. It's not magic.