Even better, this can carry on for a few iterations. And both LLMs can be:
1. Budgeted ("don't exceed X amount")
2. Improved (another LLM can improve their prompts)
and so on. I think we are fixating on how _we_ do things, not on how this new world will do its _own_ thing. That, to me, is the real danger.
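The iterated loop above can be sketched in a few lines. Everything here is hypothetical scaffolding: `llm` is a placeholder for a real model call, `improve_prompt` stands in for the second LLM that rewrites prompts, and the budget is a simple call counter for "don't exceed X amount".

```python
def llm(prompt: str, message: str) -> str:
    # Placeholder model call: a real system would hit an LLM API here.
    return f"[{prompt}] reply to: {message}"

def improve_prompt(prompt: str) -> str:
    # Stand-in for another LLM that refines an agent's prompt.
    return prompt + "+"

def converse(prompt_a: str, prompt_b: str, opening: str, budget: int) -> list[str]:
    """Run the A<->B exchange until the shared call budget is spent."""
    transcript: list[str] = []
    prompts = [prompt_a, prompt_b]
    message, calls, turn = opening, 0, 0
    while calls < budget:  # budgeted: stop at X calls total
        message = llm(prompts[turn], message)
        transcript.append(message)
        calls += 1
        # Improved: every other turn, a third LLM rewrites the speaker's prompt.
        if calls % 2 == 0:
            prompts[turn] = improve_prompt(prompts[turn])
        turn = 1 - turn  # hand the conversation to the other agent
    return transcript

log = converse("A", "B", "hello", budget=5)
```

With real models in place of the stubs, nothing in this loop requires a human in it, which is the point.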