
492 points storf45 | 3 comments
dylan604 ◴[] No.42157048[source]
People just do not appreciate how many gotchas can pop up doing anything live. Sure, Netflix might have a great CDN that works well for their canned content, and I could see how they might have assumed that was the hardest part.

Live has changed over the years: from large satellite dishes beaming to a geosat and back down to the broadcast center ($$$$$), to microwave to a more local broadcast center ($$$$), to running dedicated fiber long haul back to a broadcast center ($$$), to having a kit with multiple cell providers pushing a signal back to a broadcast center ($$), to having a direct internet connection to a server accepting a live HTTP stream ($).

I'd be curious to know what their live plan was and what their redundant plan was.

replies(6): >>42157110 #>>42157117 #>>42157164 #>>42159101 #>>42159285 #>>42159954 #
diggan ◴[] No.42157110[source]
> People just do not appreciate how many gotchas can pop up doing anything live.

Sure thing, but also, how many resources do you think Netflix threw at this event? If organizations like FOSDEM and CCC can run live events across the globe (though with way smaller viewership) without major hiccups, on relatively tiny budgets and smaller infrastructure overall, how could Netflix not?

replies(4): >>42157143 #>>42157462 #>>42157501 #>>42158967 #
phyrex ◴[] No.42157143[source]
Scale changes everything; I don't think it's fair to shrug this off.
replies(3): >>42157225 #>>42157233 #>>42158828 #
tiluha ◴[] No.42157225[source]
This is true, but scale comes after production. Once you have the video encoded on a server with a stable connection, the hard part is over. What Netflix failed to do is spread the files to enough servers around the globe to handle the load. I'm surprised they were unable(?) to use their network of edge servers to handle the live stream. Just run the stream with a 10-second delay and use that time to push the stream segments to the edge servers.
replies(2): >>42157401 #>>42158664 #
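The "delay, then fan out" idea in the parent comment can be sketched as a toy model. This is purely hypothetical for illustration (the `DelayedFanOut` class, segment names, and timings are all made up, and it is not Netflix's actual architecture): segments replicate to edge caches immediately on ingest, and the fixed delay only gates when the live edge becomes playable, giving replication a head start on viewers.

```python
from collections import deque

PLAYBACK_DELAY = 10  # live edge is held back this many seconds (assumption)

class DelayedFanOut:
    """Toy model: replicate each segment to all edges on ingest,
    but only expose it to players once the delay window has passed."""

    def __init__(self, edges):
        self.edges = edges      # simulated edge caches (sets of segment ids)
        self.pending = deque()  # (produced_at, segment_id), in arrival order

    def ingest(self, now, segment_id):
        """Origin receives a freshly encoded segment."""
        self.pending.append((now, segment_id))
        # Replication starts immediately; the delay only gates playback.
        for edge in self.edges:
            edge.add(segment_id)

    def playable(self, now):
        """Segments old enough that every edge has had time to fetch them."""
        return [s for t, s in self.pending if now - t >= PLAYBACK_DELAY]

edges = [set(), set(), set()]
cdn = DelayedFanOut(edges)
cdn.ingest(now=0, segment_id="seg-001")
cdn.ingest(now=2, segment_id="seg-002")

assert cdn.playable(now=5) == []                   # still inside the delay window
assert cdn.playable(now=12) == ["seg-001", "seg-002"]
assert all("seg-001" in e for e in edges)          # already replicated everywhere
```

The design point is that the client-facing playlist lags the encoder, so by the time any viewer requests a segment it is already cached at the edge rather than hammering the origin.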
1. dylan604 ◴[] No.42157401[source]
This right here is where I'd expect the failure to occur. This isn't Joey Beercan running OBS using their home internet connectivity.

This is a major broadcast. I'd expect a full-on broadcast truck/trailer. If they were attempting to broadcast this with the ($) option directly to a server from onsite, then I would demand my money back. Broadcasting a live IP signal falls on its face so many times that it's only the cheap-bastard option. Get the video signal, as a video signal, away from the live location to a facility with stable, redundant networking.

This is the kind of approach someone familiar only with computers/software/networking would come up with, rather than someone in broadcasting. It's nice to think about disrupting, but this is the kind of failure that disruptors never think about. Broadcasters have been there and done that with ensuring live broadcasts don't go down because an internet connection couldn't keep up.

replies(1): >>42158889 #
2. shrubble ◴[] No.42158889[source]
Lumen has their Vyvx product/service, which uses fiber for broadcast television.
replies(1): >>42159508 #
3. chgs ◴[] No.42159508[source]
I’ve been using Vyvx since it was called Global Crossing/Genesis; it was fairly unique when it started, but point-to-point IP distribution of programs has been the norm for at least 15 years. We still have backup paths on major events on a different technology; you’d be surprised how common a dual failure on two paths can be. For example, for output from the Euro football this summer, my main paths were on a couple of leased lines with -7, but I still had a backup on some local internet into a different city, just in case there was a meltdown of the main provider’s network (it’s happened before with ipath; automation is great until it isn’t).
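The point about dual failures on two paths can be illustrated with a toy simulation (a hypothetical sketch; the failure probabilities are made up, not measured): two paths that share a failure mode (same provider, same automation) go down together far more often than two genuinely independent paths would, which is why the backup belongs on a different technology.

```python
import random

def outage_probability(p_path, p_shared, trials=100_000, seed=42):
    """Estimate P(both paths down) when each path fails independently
    with probability p_path, plus a shared-cause failure (same provider
    network, same automation bug) that takes out both at once."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        shared = rng.random() < p_shared
        a_down = shared or rng.random() < p_path
        b_down = shared or rng.random() < p_path
        outages += a_down and b_down
    return outages / trials

# Two diverse paths: only coincidental simultaneous failures.
independent = outage_probability(p_path=0.01, p_shared=0.0)
# Two paths sharing a provider: a small common-cause term dominates.
correlated = outage_probability(p_path=0.01, p_shared=0.005)

assert correlated > 10 * independent  # shared failure mode dominates the risk
```

Even a tiny shared-cause probability swamps the product of two small independent ones, which matches the commenter's experience that "a dual failure on two paths" is more common than naive math suggests.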