I think the difference between situations where AI-driven development works and where it doesn't will come down largely to the quality of the engineers supervising and prompting the agents generating the code, and the degree to which those engineers manually evaluate the output before moving it forward. Good engineers who understand what they're telling an agent to do are still extremely valuable, and are unlikely to go anywhere in the short to mid term. AI tools are not yet reliable on their own, even for systems they helped build, and it's unclear whether they will become so any time soon purely through model scaling (though it's possible).
I think you can see the realities of AI tooling in the fact that the major AI companies are hiring lots and lots of engineers, not just for AI-related positions, but for all sorts of general engineering positions. For example, here's a post for a backend engineer at OpenAI: https://openai.com/careers/backend-software-engineer-leverag... - and one from Anthropic: https://job-boards.greenhouse.io/anthropic/jobs/4561280008.
Note that neither of these requires direct experience with AI coding agents, just an interest in the topic! Contrast that with the many companies that now demand engineers explain how they're using AI-driven workflows. When they're serious about hiring people to do the work that makes them money, rather than engaging in marketing hype, the AI companies are honest: AI agents are tools, just like IDEs, version control systems, etc. It's up to the wise engineer to use them well.
Is it possible they're just hiring these folks to make their models better and later replace them? It's possible. But I'm not sure when, if ever, they'll reach the point where that's viable.