i suppose, gradually and then suddenly?
each "fix" to incorrect reasoning/solution doesn't just solve the current instance, it also ends up in a rule-based system that will be used in future
initially, being in the loop is necessary; once you find yourself "just approving", you can relax and step back
or, more likely, initially you need fine-grained tasks; as reliability grows, tasks can become more complex
"parallelizing" allows single (sub)agents with ad-hoc responsibilities to rely on separate "institutionalized" context/rules, .ie: architecture-agent and coder-agent can talk to each others and solve a decision-conflict based on wether one is making the decision based on concrete rules you have added, or hallucinating decisions
i have seen a friend build a rule-based system and was impressed by how well LLMs work within that context