It's like a very big stack of zero-days leaking to the public all at once. Sure, they'll all get fixed eventually, and everyone will update, eventually. But until that happens, the usual suspects are going to have a field day.
It may come to favor defense in the long term. But it's AGI. If that tech lands, the "long term" may not exist.
The defender needs to get everything right; the attacker only needs to get one thing right.
In the same way we build "muscle memory" to autonomously handle simple tasks, a superintelligence might be able to dynamically delegate to human-level (or greater) sub-intelligences that vigilantly watch everything it needs watched.
One of the most intuitive pathways to ASI is that AGI eventually gets incredibly good at improving AGI. And a system like this would be able to craft and direct stripped-down AI subsystems.
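To make the delegation idea a bit more concrete, here's a minimal sketch in Python of one way a coordinating process could hand narrow, always-on monitoring off to cheap watcher workers and only pay attention when one of them escalates. Everything here (the Watcher class, the fake signal source, the 0.95 threshold) is made up for illustration, not any real agent framework.

    import queue
    import random
    import threading
    import time

    # Illustrative only: a "supervisor" loop delegates narrow, always-on
    # monitoring to lightweight watcher threads and is only interrupted
    # when a watcher escalates something.

    class Watcher(threading.Thread):
        """A stripped-down worker that vigilantly polls one signal source."""

        def __init__(self, name, source, alerts):
            super().__init__(name=name, daemon=True)
            self.source = source    # callable returning a reading in [0, 1)
            self.alerts = alerts    # shared queue back to the supervisor

        def run(self):
            while True:
                reading = self.source()
                if reading > 0.95:  # crude anomaly threshold
                    self.alerts.put((self.name, reading))
                time.sleep(0.1)

    def noisy_signal():
        """Stand-in for whatever a watcher would actually monitor."""
        return random.random()

    def supervisor(num_watchers=4, runtime_s=2.0):
        """Spawn watchers, then attend only to escalated alerts."""
        alerts = queue.Queue()
        for i in range(num_watchers):
            Watcher(f"watcher-{i}", noisy_signal, alerts).start()

        deadline = time.time() + runtime_s
        while time.time() < deadline:
            try:
                name, reading = alerts.get(timeout=0.2)
                print(f"{name} escalated an anomaly: {reading:.3f}")
            except queue.Empty:
                pass                # supervisor is free for other work

    if __name__ == "__main__":
        supervisor()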