Not through a single system; the advantage here is diversity rather than winner-takes-all.
The World Cup final itself (and other major events) is distributed from the host broadcaster to rights holders either on site at the IBC or at major exchange points.
When I've done major events of that magnitude there's usually a backup scanner and sometimes even a tertiary backup. Obviously feeds get sent via all manner of routes - the international feed, for example, may be handed off at an exchange point, but the reserve is likely available on satellite for people to downlink. If the scanner goes (fire etc.), then at least some camera/sound feeds can be switched direct to these points.
Short of events that take out the venue itself, I can't think of a plausible scenario which would cause the generation or distribution of the broadcast to break on a global basis.
I don't work for OBS/HBS/etc but I can't imagine they are any worse than other broadcast professionals.
The IT part of this stuff is pretty trivial nowadays; even the complex parts, like the 2110 networks in the scanner, tend to be commoditised and treated as you'd treat any other single system.
The most technically challenging part is unicast streaming to millions of people at low latency (DASH etc.). I wouldn't expect an enormous architectural difference between a system that can stream to 10 million and one that can stream to 100 million, though.