
362 points by tosh | 1 comment
h4ck_th3_pl4n3t No.42073457
The problem is not at the network level.

The problem is that the developers behind this way of streaming video data seem to have no idea how video codecs work.

If they are in control of the headless Chromium instances, the video streams, and the receiving backend of that video stream... why not simply use RDP or a similar video streaming protocol that is made for exactly this purpose?

This whole post reads like an article from a web dev who is totally in over their head, trying to implement something they didn't take the time to think through. They argue about TCP fragmentation when that isn't even an issue, and they use a TCP stream when that is just about the worst choice in this situation because of round-trip costs.
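The point about TCP fragmentation can be made concrete: TCP hands the application an ordered, reassembled byte stream, so IP-level fragmentation never surfaces at the application layer. The only thing the application actually has to implement is message framing. A minimal stdlib-only sketch of length-prefixed framing (a Unix socketpair stands in for a real TCP connection here):

```python
# TCP delivers an ordered byte stream; IP fragmentation is invisible to the
# application. What the application DOES have to handle is framing, e.g. with
# a 4-byte length prefix per message.
import socket
import struct

def send_frame(sock: socket.socket, payload: bytes) -> None:
    # Big-endian 4-byte length prefix, then the payload itself.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # Keep reading until exactly n bytes have arrived (recv may return less).
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

a, b = socket.socketpair()  # stand-in for a connected TCP socket pair
send_frame(a, b"frame-1" * 1000)    # payload larger than a single MTU
print(len(recv_frame(b)))           # -> prints 7000; reassembly is transparent
a.close(); b.close()
```

The real cost of TCP in this setting is not fragmentation but head-of-line blocking and retransmission round trips, which is why media protocols prefer UDP-based transports.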

But I guess there is no JS API for that, so it's outside the development scope? I can't imagine any reason not to use a much more efficient video codec here, other than this running in Node.js and potentially missing offscreen canvas/buffer APIs and the C encoding libraries you could use for that.
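One way around missing C bindings, as a hedged sketch: pipe raw frames into an external encoder process instead of linking against an encoding library. This assumes ffmpeg with libx264 is installed; the resolution and frame-rate values are made up for illustration:

```python
# Hypothetical sketch: feed raw RGB frames to ffmpeg over stdin and read an
# H.264 elementary stream from stdout, instead of shipping PNG screenshots.
import subprocess

def h264_cmd(width: int, height: int, fps: int) -> list[str]:
    # Build the ffmpeg invocation: rawvideo in, low-latency H.264 out.
    return [
        "ffmpeg", "-loglevel", "error",
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "-s", f"{width}x{height}", "-r", str(fps), "-i", "-",  # frames on stdin
        "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
        "-f", "h264", "-",                                     # NAL units on stdout
    ]

def h264_encoder(width: int, height: int, fps: int) -> subprocess.Popen:
    # Spawn the encoder; caller writes width*height*3 bytes per frame to
    # proc.stdin and reads encoded output from proc.stdout.
    return subprocess.Popen(
        h264_cmd(width, height, fps),
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    )

# Usage (not executed here): enc = h264_encoder(1280, 720, 30); then stream
# raw frames into enc.stdin and forward enc.stdout to the backend.
```

Compared with per-frame screenshots, the encoder exploits inter-frame redundancy, which is the whole point of using a video codec.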

I would not want to work at this company if this is how they develop software. It must be horribly rushed, prototype-quality code everywhere.

replies(2): >>42073690 #>>42074152 #
1. doctorpangloss No.42073690
It’s alright.

It is difficult to say; I've never used the product. They don't describe what it is they are trying to do.

If you want to pipe a Zoom call to a Python process it’s complicated.

For everything else that uses WebRTC, I suppose Python should generate the candidates, and the fake browser client should hand over the Python process's candidates instead of its own. It could use the most basic bindings to libwebrtc.
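The candidate-handover idea amounts to rewriting the `a=candidate:` lines in the SDP the headless browser produced before it reaches the far end. A purely illustrative sketch (real SDP scopes candidates per media section and uses end-of-candidates markers; this flat version ignores that, and the sample SDP and addresses are invented):

```python
# Replace the browser's ICE candidates with the Python process's own, so the
# far end connects media directly to Python. Simplified: operates on the whole
# SDP rather than per m= section.
def swap_candidates(sdp: str, own_candidates: list[str]) -> str:
    kept = [line for line in sdp.splitlines()
            if not line.startswith("a=candidate:")]
    new = [f"a=candidate:{c}" for c in own_candidates]
    return "\r\n".join(kept + new) + "\r\n"

# Invented example data (documentation addresses from 192.0.2.0/24):
browser_sdp = (
    "v=0\r\n"
    "m=video 9 UDP/TLS/RTP/SAVPF 96\r\n"
    "a=candidate:1 1 udp 2122260223 192.0.2.10 56789 typ host\r\n"
)
python_candidate = "2 1 udp 2122260223 192.0.2.20 40000 typ host"
print(swap_candidates(browser_sdp, [python_candidate]))
```

In practice you would let something like aiortc or libwebrtc bindings gather the Python-side candidates rather than writing them by hand.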

If the bulk of their app is JavaScript, they ought to inject a web worker and use encoded transforms.

But I don’t know.