I am not so sure. Code written by one LLM can be reviewed by another. Puppeteer-like solutions will exist pretty soon: "Given this change, can you confirm this spec?"
Even better, this can carry on for a few iterations. And both LLMs can be:
1. Budgeted ("don't exceed X amount")
2. Improved (another LLM can improve their prompts)
and so on. I think we are fixating on how _we_ do things, not on how this new world will do its _own_ thing. That, to me, is the real danger.
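To make the idea concrete, here is a minimal sketch of such a review loop. It assumes `author` and `reviewer` are whatever wrappers you already have around two models, and the "confirmed" convention and the round budget are purely illustrative, not any particular product's API:

```python
from typing import Callable

def review_loop(
    author: Callable[[str], str],    # hypothetical wrapper around the code-writing LLM
    reviewer: Callable[[str], str],  # hypothetical wrapper around the reviewing LLM
    spec: str,
    max_rounds: int = 3,             # the "budget": don't exceed X iterations
) -> str:
    """Generate code against a spec, have a second model review it,
    and iterate until the reviewer approves or the budget runs out."""
    code = author(f"Write code satisfying this spec:\n{spec}")
    for _ in range(max_rounds):
        verdict = reviewer(
            f"Given this change, can you confirm this spec?\n"
            f"Spec:\n{spec}\n\nCode:\n{code}"
        )
        if verdict.strip().lower().startswith("confirmed"):
            break
        # Feed the reviewer's objections back to the author for another pass.
        code = author(f"Revise to address this review:\n{verdict}\n\nCode:\n{code}")
    return code
```

The same structure admits the second point too: a third model could rewrite the prompt templates between runs rather than a human tuning them.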
replies(2):