Arguably this may change in the distant future if we ever build something of significantly greater intelligence, or just capability, than a human - but today's AI is still struggling to draw clock faces, so it's not quite there yet...
The thing with automation is that it can be scaled, which I would say favors the attacker, at least at this stage of the arms race - they can launch thousands of attacks against thousands of targets, probing for that one chink in the armor.
I suppose the defenders could do the exact same thing - use this kind of automation to find their own vulnerabilities before the bad guys do. Not every corporation, and probably extremely few, would have the skills to do this, so one could imagine some government group (part of DHS?) set up to probe the security of US companies, perhaps requiring opt-in from the companies.
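A rough sketch of what that kind of at-scale probing can look like - plain Python, where the target list and the bare TCP-reachability check are placeholder assumptions standing in for a real vulnerability check:

    import asyncio

    # Hypothetical inventory - stands in for whatever target list an attacker
    # (or a defender auditing its own estate) is sweeping.
    TARGETS = [f"host{i}.example.com" for i in range(1000)]

    async def probe(host: str, sem: asyncio.Semaphore,
                    port: int = 80, timeout: float = 2.0) -> bool:
        """Placeholder 'vulnerability check': just TCP reachability."""
        async with sem:  # cap concurrency so we don't exhaust sockets
            try:
                _, writer = await asyncio.wait_for(
                    asyncio.open_connection(host, port), timeout)
                writer.close()
                await writer.wait_closed()
                return True
            except (OSError, asyncio.TimeoutError):
                return False

    async def main() -> None:
        sem = asyncio.Semaphore(100)
        # One coroutine per target; the event loop runs them concurrently.
        results = await asyncio.gather(*(probe(h, sem) for h in TARGETS))
        hits = [h for h, ok in zip(TARGETS, results) if ok]
        print(f"{len(hits)} of {len(TARGETS)} targets reachable")

    asyncio.run(main())

The point is that the same code path handles one target or ten thousand - scaling the attack (or the defensive audit) is just a longer list. The asymmetry is only in who gets to pick the targets.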
Criminal organizations take a different approach, much like spammers: they can purchase or rent C2 (command-and-control) infrastructure and other software for mass exploitation (e.g. ransomware). This stuff is usually very professionally coded and highly effective.
Botnets, hosting in various countries out of reach of Western authorities, etc. are all common tactics as well.
It's like a very, very big stack of zero-days leaking to the public. Sure, they'll all get fixed eventually, and everyone will update, eventually. But until that happens, the usual suspects are going to have a field day.
It may come to favor defense in the long term. But it's AGI. If that tech lands, the "long term" may not exist.
Groups that were too unprofitable to target before are now profitable.
The defender needs to get everything right; the attacker only needs to get one thing right.
On average, today's systems are much more secure than those from 2005, because the known vulns from those days got patched, and methodologies improved enough that they weren't replaced 1:1 by newer vulns.
This is what allows defenders to keep up with the attackers long term. My concern is that AGI is the kind of thing that may result in no "long term".
The same way we can build "muscle memory" to delegate simple autonomous tasks, a superintelligence might be able to dynamically delegate to human-level (or greater) sub-intelligences to vigilantly watch everything it needs to.
One of the most intuitive pathways to ASI is that AGI eventually gets incredibly good at improving AGI. A system like that would be able to craft and direct stripped-down AI subsystems.