h4ck_th3_pl4n3t:
The problem is not at the network level.

The problem is that the developers behind this way of streaming video data seem to have no idea how video codecs work.

If they control the headless Chromium instances, the video streams, and the receiving backend for that video stream... why not simply use RDP or a similar streaming protocol that is made for exactly this purpose?
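
For the sake of argument, a sketch of that path (assumes ffmpeg is installed and the Chromium instances render to an Xvfb display; ":99", the resolution, and the backend address are all placeholders): grab the virtual display, encode it with a real inter-frame codec, and ship it over RTP:

    import { spawn } from "node:child_process";

    // Capture the X display the headless Chromium draws to, encode it as
    // H.264, and stream it over RTP instead of shipping per-frame images.
    const ffmpeg = spawn("ffmpeg", [
      "-f", "x11grab",               // capture an X display
      "-framerate", "30",
      "-video_size", "1280x720",
      "-i", ":99",                   // hypothetical Xvfb display
      "-c:v", "libx264",             // proper video codec
      "-preset", "ultrafast",
      "-tune", "zerolatency",
      "-f", "rtp",
      "rtp://backend.example:5004",  // placeholder receiving endpoint
    ]);

    ffmpeg.stderr.pipe(process.stderr); // ffmpeg logs to stderr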

This whole post reads like an article from a web dev who is totally in over their head, trying to implement something they didn't take the time to think through. They argue about TCP fragmentation when that is not even an issue, and use a TCP stream when that is literally the worst thing you can do in this situation because of round-trip costs.

But I guess there is no JS API for that, so it's outside the development scope? I can't imagine any reason not to use a much more efficient video codec here, other than this running in node.js and potentially missing the offscreen canvas/buffer APIs and C encoding libraries you could use for it.
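
Even without those APIs, node can reach a native encoder by piping raw frames into an external ffmpeg process. Rough sketch; the frame source is stubbed out as an assumption:

    import { spawn } from "node:child_process";

    const WIDTH = 1280, HEIGHT = 720;

    // Pipe raw RGBA frames into ffmpeg's stdin and read H.264 back out --
    // no offscreen canvas or C bindings needed on the node side.
    const encoder = spawn("ffmpeg", [
      "-f", "rawvideo",
      "-pix_fmt", "rgba",
      "-video_size", `${WIDTH}x${HEIGHT}`,
      "-framerate", "30",
      "-i", "pipe:0",                // raw frames in on stdin
      "-c:v", "libx264",
      "-preset", "veryfast",
      "-pix_fmt", "yuv420p",
      "-f", "mpegts",
      "pipe:1",                      // compressed stream out on stdout
    ]);

    // Hypothetical frame source, e.g. decoded screenshots from the browser.
    function getFrame(): Buffer {
      return Buffer.alloc(WIDTH * HEIGHT * 4); // RGBA placeholder
    }

    setInterval(() => encoder.stdin.write(getFrame()), 1000 / 30);
    encoder.stdout.on("data", (chunk) => {
      // forward the compressed transport stream to the backend here
    });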

I would not want to work at this company if this is how they develop software. It must be horribly rushed prototype code, everywhere.

dmazzoni:
Their business is joining meetings from 7 different platforms (Zoom, Meet, WebEx, etc.) and capturing the video.

They don't have control of the incoming video format.

They don't even have access to the incoming video data, because they're not using an API. They're joining the meeting using a real browser, and capturing the video.

Is it an ugly hack? Maybe. But it's also a pretty robust one, because they're not dependent on an API that might break, nor on reverse-engineering a protocol that might change. They're a bit dependent on the frontend, but that changes rarely, and it's super easy to adapt when it does.
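
For a rough picture of what "joining with a real browser and capturing the video" can look like, here's a sketch using Chromium's DevTools screencast via puppeteer (the meeting URL is a placeholder, and I'm not claiming this is exactly their pipeline):

    import puppeteer from "puppeteer";

    (async () => {
      const browser = await puppeteer.launch({ headless: true });
      const page = await browser.newPage();
      await page.goto("https://meet.example/some-meeting"); // placeholder URL

      // Tap Chromium's own screencast through the DevTools protocol:
      // Chromium pushes a frame, we ack it to request the next one.
      const cdp = await page.createCDPSession();
      cdp.on("Page.screencastFrame", async ({ data, sessionId }) => {
        const frame = Buffer.from(data, "base64"); // one JPEG frame
        // ...hand `frame` to the recorder/encoder...
        await cdp.send("Page.screencastFrameAck", { sessionId });
      });
      await cdp.send("Page.startScreencast", {
        format: "jpeg",
        quality: 80,
        everyNthFrame: 1,
      });
    })();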

lostmsu:
Even in this case it is nonsensical. Dunno about Linux, but on Windows you'd just feed the GPU window surface into a hardware video encoder via a shared texture, with basically zero data transfer, and get a compressed stream out.