Cheaper and more straightforward.
Their discussion of fragmentation shows they're clueless about the details of the stack. All that shit is basically irrelevant.
They're capturing video from inside a Chromium process. How exactly do you expect to get the raw captured frames into HLS?
Are you proposing implementing the HLS server inside a web process?
Since it's coming from a headless process, they can just pipe it into ffmpeg, which is probably what they're using on the back end anyway. Send the output to files, then copy the segments to S3 as they're generated. And you can drop the frame rate and bitrate while you're at it, saving time and latency.
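Something like this, as a rough sketch; the frame size, pixel format, bitrate, and bucket name are all placeholders you'd match to whatever the capture actually emits:

    import subprocess
    import boto3

    # Hypothetical capture parameters -- match whatever the headless
    # Chromium capture actually produces.
    WIDTH, HEIGHT, FPS = 1280, 720, 30

    # ffmpeg reads raw frames on stdin and writes HLS segments to disk.
    # This is also where you drop the frame rate and bitrate.
    ffmpeg = subprocess.Popen(
        [
            "ffmpeg",
            "-f", "rawvideo",            # input: raw, uncompressed frames
            "-pix_fmt", "bgra",          # match the capture's pixel format
            "-s", f"{WIDTH}x{HEIGHT}",
            "-r", str(FPS),
            "-i", "-",                   # read frames from stdin
            "-r", "15",                  # output frame rate: halve it
            "-c:v", "libx264",
            "-preset", "veryfast",
            "-b:v", "1500k",             # modest bitrate, tune to taste
            "-f", "hls",
            "-hls_time", "4",            # 4-second segments
            "-hls_list_size", "0",       # keep every segment in the playlist
            "out.m3u8",
        ],
        stdin=subprocess.PIPE,
    )

    def feed_frame(frame: bytes) -> None:
        # Called once per captured frame from the headless process.
        ffmpeg.stdin.write(frame)

    def upload_segment(path: str) -> None:
        # Copy each finished segment to S3 as it lands on disk.
        # "my-stream-bucket" is a made-up name.
        boto3.client("s3").upload_file(path, "my-stream-bucket", f"live/{path}")

Watch the output directory (inotify, polling, whatever) and call upload_segment on each completed .ts file plus the playlist; the player just fetches it all straight from S3.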
It's really not rocket science. You just have to understand your problem domain better.
Shipping uncompressed video around is ridiculous, unless you're doing video editing. And even then you should use low-res copies and just push around EDLs (edit decision lists) until you need to render (unless you need high-res to see something).
Given that they're doing all that work, they might as well try to get an HLS encoder running in Chrome. An MP3 codec compiled to WebAssembly just showed up on HN, so an HLS live encoder may not be too hard. I mean, if they were blowing a million because of their bad design, they could blow another million building a browser-based HLS encoder.