
362 points | tosh | 4 comments
apitman ◴[] No.42069973[source]
I've been toying around with a design for a real-time chat protocol, and was recently in a debate over WebSockets vs. HTTP long polling. This should give me some nice ammunition.
replies(1): >>42070064 #
1. pavlov ◴[] No.42070064[source]
No, this story is about interprocess communication on a single computer; it has practically nothing to do with WebSockets vs. something else over an IP network.
replies(1): >>42073578 #
2. apitman ◴[] No.42073578[source]
Then why do they say their profiling data showed that WebSocket fragmentation and masking were the hot spots?
replies(1): >>42074836 #
3. pavlov ◴[] No.42074836[source]
Because they were sending so much data to another process over the WebSocket.

An uncompressed 1920*1080 30fps RGB stream is 178 megabytes / second. (This is 99% likely what they were capturing from the headless browser, although maybe at a lower frame rate - you don’t need full 30 for a meeting capture.)
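A quick back-of-the-envelope check of that figure, assuming 24-bit RGB (3 bytes per pixel, no alpha):

    # Rough bandwidth of an uncompressed 1080p RGB capture stream
    width, height = 1920, 1080
    bytes_per_pixel = 3                                # 24-bit RGB
    fps = 30

    frame_bytes = width * height * bytes_per_pixel     # 6,220,800 bytes (~6.2 MB) per frame
    stream_bytes = frame_bytes * fps                   # 186,624,000 bytes per second

    print(stream_bytes / 2**20)                        # ~178 MiB/s, matching the figure above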

In comparison, a standard Netflix HD stream is around 1.5 megabits / s, so about 0.19 megabytes / s.

The uncompressed stream is almost a thousand times larger. At that rate, the WebSocket overhead (fragmentation and masking in particular) starts to have a real impact.
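To make the masking cost concrete: RFC 6455 requires every client-to-server payload to be XORed with a 4-byte masking key, so the work scales with payload size, and every byte of every raw frame gets touched before the data even reaches the socket. A minimal Python sketch of the idea (illustrative only, not the code they profiled; real implementations mask a word at a time or with SIMD, but the work is still proportional to the payload):

    import os

    def mask_payload(payload: bytes, key: bytes) -> bytes:
        # RFC 6455 client-to-server masking: XOR each payload byte with the 4-byte key
        return bytes(b ^ key[i % 4] for i, b in enumerate(payload))

    key = os.urandom(4)                    # fresh masking key per frame
    frame = bytes(1920 * 1080 * 3)         # one uncompressed 1080p RGB frame (~6.2 MB)
    masked = mask_payload(frame, key)      # O(len(frame)) work on every single frame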

replies(1): >>42080395 #
4. apitman ◴[] No.42080395{3}[source]
It should still have the same impact at scale, right? I.e., if I had a server handling enough WebSocket connections to be at 90% CPU usage, switching to a protocol with lower overhead should reduce that usage and thus save me money. This is of course assuming the system isn't I/O-bound.