492 points by storf45 | 4 comments
nikolay ◴[] No.42159649[source]
Arrogant Netflix! They always brag about how technologically superior they are, and then they can't handle a simple technical challenge. I didn't have a buffering issue, I had an error page - for hours! Yet they kept advertising the boxing match to me. What a joke! If you can't stream it, don't advertise it - at least save face with people like me who don't care about boxing!
replies(2): >>42159686 #>>42159731 #
1. notimetorelax ◴[] No.42159686[source]
I think you’re oversimplifying it. Live event streaming is very different from movie streaming. All those edge cache servers become kinda useless and you start hitting peering bottlenecks.
replies(1): >>42159800 #
2. YZF ◴[] No.42159800[source]
Edge caches are not useless for live streaming - they're critical. The upstream from those caches has no way of handling each individual user; the stream needs to hit the edge cache, and end users should be served from there.

A typical streaming architecture is multi-tiered caches: source -> midtier -> edge.

We don't know what happened, but it's possible they ran out of capacity on their edge (or at any other tier).
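
For illustration, a minimal sketch of the request-collapsing idea at the edge tier: many concurrent viewers asking for the same live segment result in at most one upstream fetch. This is not Netflix's implementation - the segment and mid-tier names are made up, and the cache has no TTL or eviction.

    // Edge-cache request coalescing sketch, using golang.org/x/sync/singleflight.
    package main

    import (
        "fmt"
        "sync"

        "golang.org/x/sync/singleflight"
    )

    var (
        group singleflight.Group
        mu    sync.RWMutex
        cache = map[string][]byte{} // segment URL -> bytes (no TTL/eviction here)
    )

    // fetchFromMidtier stands in for the upstream request to the mid-tier cache.
    func fetchFromMidtier(segment string) ([]byte, error) {
        return []byte("video data for " + segment), nil
    }

    // getSegment serves a live segment: cache hit if present, otherwise exactly
    // one upstream fetch happens no matter how many viewers ask concurrently.
    func getSegment(segment string) ([]byte, error) {
        mu.RLock()
        if b, ok := cache[segment]; ok {
            mu.RUnlock()
            return b, nil
        }
        mu.RUnlock()

        v, err, _ := group.Do(segment, func() (interface{}, error) {
            b, err := fetchFromMidtier(segment)
            if err != nil {
                return nil, err
            }
            mu.Lock()
            cache[segment] = b
            mu.Unlock()
            return b, nil
        })
        if err != nil {
            return nil, err
        }
        return v.([]byte), nil
    }

    func main() {
        b, _ := getSegment("/live/event/seg_001.ts")
        fmt.Println(len(b), "bytes served from edge")
    }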

replies(1): >>42164114 #
3. ta1243 ◴[] No.42164114[source]
The BBC had a similar issue with a live stream five years ago, where events conspired and a CDN "failed open", which effectively DoS'd the shared origin and took the output down across all CDNs.

> Even though widely used, this pattern has some significant drawbacks, the best illustration being the major incident that hit the BBC during the 2018 World Cup quarter-final. Our routing component experienced a temporary wobble which had a knock-on effect and caused the CDN to fail to pull one piece of media content from our packager on time. The CDN increased its request load as part of its retry strategy, making the problem worse, and eventually disabled its internal caches, meaning that instead of collapsing player requests, it started forwarding millions of them directly to our packager. It wasn’t designed to serve several terabits of video data every second and was completely overwhelmed. Although we used more than one CDN, they all connected to the same packager servers, which led to us also being unable to serve the other CDNs. A couple of minutes into extra time, all our streams went down, and angry football fans were cursing the BBC across the country.

https://www.bbc.co.uk/webarchive/https%3A%2F%2Fwww.bbc.co.uk...
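
The usual mitigation for that kind of retry amplification is a bounded retry budget with capped exponential backoff and jitter, while keeping request collapsing enabled even when the origin is slow. A rough sketch - not the BBC's or any CDN's actual code, and pullFromPackager is just a stand-in for the origin fetch:

    // Bounded retries with exponential backoff and jitter (sketch).
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func pullFromPackager(url string) ([]byte, error) {
        return nil, errors.New("packager overloaded") // stand-in for an origin fetch
    }

    // pullWithBackoff retries a bounded number of times, doubling the wait each
    // attempt and adding jitter so thousands of edge nodes don't retry in lockstep.
    func pullWithBackoff(url string, maxAttempts int) ([]byte, error) {
        backoff := 200 * time.Millisecond
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            b, err := pullFromPackager(url)
            if err == nil {
                return b, nil
            }
            if attempt == maxAttempts {
                return nil, fmt.Errorf("giving up after %d attempts: %w", attempt, err)
            }
            jitter := time.Duration(rand.Int63n(int64(backoff)))
            time.Sleep(backoff + jitter)
            backoff *= 2 // cap this in real code, e.g. at a few seconds
        }
        return nil, errors.New("unreachable")
    }

    func main() {
        _, err := pullWithBackoff("https://packager.example/live/manifest.mpd", 4)
        fmt.Println(err)
    }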

replies(1): >>42166146 #
4. YZF ◴[] No.42166146{3}[source]
This feels like a bug in the implementation and not really a drawback of the pattern. "Routing component experienced a temporary wobble" also sounds like a bug of sorts.

I've worked in this space. All these potential failure modes, and how they're mitigated, were something we paid a fair amount of attention to.