
492 points storf45 | 2 comments | source
Dem_Boys ◴[] No.42154638[source]
What do you think were the dynamics of the engineering team working on this?

I'd think this isn't too crazy to stress test. If you have 300 million users signed up, then your stress test should be 300 million simultaneous streams in HD for 4 hours. I just don't see how Netflix screws this up.

Maybe it was a management incompetence thing? Manager says something like "We only need to support 20 million simultaneous streams" and engineers implement to that spec even if the 20 million number is wildly incorrect.
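
For a sense of scale, here's a rough back-of-envelope of what 300 million simultaneous HD streams would actually mean. The numbers are my own assumptions, not anything from Netflix: roughly 5 Mbps per HD stream and about 100 Gbps of usable egress per CDN edge server.

    # Back-of-envelope only; bitrate and per-edge capacity are assumptions.
    concurrent_streams = 300_000_000
    bitrate_mbps = 5                  # assumed average HD bitrate
    edge_capacity_gbps = 100          # assumed usable egress per edge server

    aggregate_tbps = concurrent_streams * bitrate_mbps / 1_000_000
    edges_needed = concurrent_streams * bitrate_mbps / 1_000 / edge_capacity_gbps

    print(f"Aggregate egress: {aggregate_tbps:,.0f} Tbps")    # -> 1,500 Tbps
    print(f"Edge servers at full load: {edges_needed:,.0f}")  # -> 15,000

Even as a sketch, that suggests a genuine full-scale stress test is a major project in itself, not just a bigger load-test run.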

replies(1): >>42154822 #
margaretdouglas ◴[] No.42154822[source]
Has there ever been a 300m concurrent live stream? I thought Disney+ had the record at something like 60m.
replies(3): >>42155350 #>>42155862 #>>42155899 #
1. markus92 ◴[] No.42155862[source]
World Cup final, if you add up all streams worldwide?
replies(1): >>42163868 #
2. ta1243 ◴[] No.42163868[source]
Not through a single system; that's the advantage of diversity rather than winner-takes-all.

The World Cup final itself (and other major events) is distributed by the host broadcaster either on site at the IBC or at major exchange points.

When I've done major events of that magnitude there's usually a backup scanner and even a tertiary backup. Obviously feeds get sent via all manner of routes: the international feed, for example, may be handed off at an exchange point, but the reserve is likely available on satellite for people to downlink. If the scanner goes (fire etc), then at least some camera/sound feeds can be switched direct to these points; on some occasions there's a full backup scanner too.

Short of events that take out the venue itself, I can't think of a plausible scenario which would cause the generation or distribution of the broadcast to break on a global basis.

I don't work for OBS/HBS/etc but I can't imagine they are any worse than other broadcast professionals.

The IT part of this stuff is pretty trivial nowadays; even the complex parts, like the SMPTE 2110 networks in the scanner, tend to be commoditised and treated as you'd treat any other single system.

The most technically challenging part is unicast streaming to millions of people at low latency (DASH etc). I wouldn't expect an enormous architectural difference between a system that can stream to 10 million and one that can stream to 100 million, though.
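
For anyone unfamiliar with that last part, here's a minimal sketch of the segment-polling loop a DASH-style live client runs. The URLs, segment template, and 2-second segment duration are made-up placeholders, not any real service's API.

    # Illustrative sketch of a DASH-style live client; all URLs and the
    # 2-second segment duration are hypothetical.
    import time
    import urllib.request

    MANIFEST_URL = "https://cdn.example.com/live/stream.mpd"    # hypothetical
    SEGMENT_URL = "https://cdn.example.com/live/seg_{num}.m4s"  # hypothetical
    SEGMENT_DURATION = 2.0   # seconds of media per segment (assumed)

    def fetch(url: str) -> bytes:
        with urllib.request.urlopen(url) as resp:
            return resp.read()

    def play_live(start_segment: int, count: int) -> None:
        # Each viewer independently pulls short segments over plain HTTP,
        # which is what lets ordinary CDNs fan a live stream out to millions
        # of unicast clients.
        manifest = fetch(MANIFEST_URL)  # a real player parses this MPD to find segments
        num = start_segment
        for _ in range(count):
            segment = fetch(SEGMENT_URL.format(num=num))
            # hand `segment` to the decoder here
            num += 1
            time.sleep(SEGMENT_DURATION)  # real players buffer ahead instead

The hard part isn't the client loop, it's keeping segment latency low while the CDN absorbs tens of millions of these loops at once.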