Also, I have never seen any Netflix employees who are arrogant or who think they are superior to other people. What I have seen is that Netflix's engineering organization frequently describes the technical challenges it faces and discusses how it solves them.
A typical streaming architecture uses multi-tiered caches: source -> mid-tier -> edge. We don't know what happened, but it's possible they ran out of capacity at the edge (or at any other tier).
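As a rough sketch of what that tiering looks like in Go (hypothetical names and a toy in-memory store, not any particular CDN's implementation; real caches add TTLs, eviction, and consistent hashing): a lookup walks edge -> mid-tier -> origin and backfills the faster tiers on the way out, so later requests for the same segment are absorbed closer to the user.

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errMiss = errors.New("cache miss")

    // Tier is any layer that can serve a media segment by key.
    type Tier interface {
    	Get(key string) ([]byte, error)
    	Put(key string, val []byte)
    }

    type memTier struct{ m map[string][]byte }

    func (t *memTier) Get(key string) ([]byte, error) {
    	if v, ok := t.m[key]; ok {
    		return v, nil
    	}
    	return nil, errMiss
    }

    func (t *memTier) Put(key string, val []byte) { t.m[key] = val }

    // fetch walks the tiers fastest-first and backfills the ones
    // that missed, so the next request is served at the edge.
    func fetch(key string, tiers []Tier) ([]byte, error) {
    	for i, t := range tiers {
    		v, err := t.Get(key)
    		if err == nil {
    			for j := 0; j < i; j++ {
    				tiers[j].Put(key, v)
    			}
    			return v, nil
    		}
    	}
    	return nil, fmt.Errorf("all tiers missed for %q", key)
    }

    func main() {
    	edge := &memTier{m: map[string][]byte{}}
    	mid := &memTier{m: map[string][]byte{}}
    	origin := &memTier{m: map[string][]byte{"seg1": []byte("video bytes")}}
    	v, _ := fetch("seg1", []Tier{edge, mid, origin})
    	fmt.Printf("%s\n", v) // a second fetch would now hit the edge
    }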
> Even though widely used, this pattern has some significant drawbacks, the best illustration being the major incident that hit the BBC during the 2018 World Cup quarter-final. Our routing component experienced a temporary wobble which had a knock-on effect and caused the CDN to fail to pull one piece of media content from our packager on time. The CDN increased its request load as part of its retry strategy, making the problem worse, and eventually disabled its internal caches, meaning that instead of collapsing player requests, it started forwarding millions of them directly to our packager. It wasn’t designed to serve several terabits of video data every second and was completely overwhelmed. Although we used more than one CDN, they all connected to the same packager servers, which led to us also being unable to serve the other CDNs. A couple of minutes into extra time, all our streams went down, and angry football fans were cursing the BBC across the country.
https://www.bbc.co.uk/webarchive/https%3A%2F%2Fwww.bbc.co.uk...
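The "collapsing player requests" step in that quote is the crucial one: with healthy caches, thousands of concurrent requests for the same segment coalesce into a single origin fetch. Here's a minimal sketch of the idea using Go's golang.org/x/sync/singleflight, where fetchFromPackager is a hypothetical stand-in for the upstream call (this is not the BBC's or any CDN's actual code):

    package main

    import (
    	"fmt"
    	"sync"
    	"sync/atomic"
    	"time"

    	"golang.org/x/sync/singleflight"
    )

    var (
    	group       singleflight.Group
    	originCalls int64
    )

    // fetchFromPackager is a hypothetical stand-in for the expensive
    // upstream request the CDN would otherwise make once per player.
    func fetchFromPackager(segment string) (interface{}, error) {
    	atomic.AddInt64(&originCalls, 1)
    	time.Sleep(50 * time.Millisecond) // simulated packaging latency
    	return "bytes for " + segment, nil
    }

    func main() {
    	var wg sync.WaitGroup
    	for i := 0; i < 1000; i++ { // 1000 "players" want the same segment
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			// Concurrent callers with the same key share one
    			// in-flight upstream call instead of each hitting
    			// the packager directly.
    			_, _, _ = group.Do("extra-time/seg42", func() (interface{}, error) {
    				return fetchFromPackager("extra-time/seg42")
    			})
    		}()
    	}
    	wg.Wait()
    	fmt.Printf("1000 player requests -> %d origin fetch(es)\n",
    		atomic.LoadInt64(&originCalls))
    }

The retry amplification described in the quote is the other half of the failure: each retry adds load to an origin that is already struggling. Capped exponential backoff with jitter is the standard mitigation there.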
I worked in this space. All these potential failure modes, and how they're mitigated, were something we paid a fair amount of attention to.