
362 points tosh | 7 comments
trollied ◴[] No.42069524[source]
>In a typical TCP/IP network connected via ethernet, the standard MTU (Maximum Transmission Unit) is 1500 bytes, resulting in a TCP MSS (Maximum Segment Size) of 1448 bytes. This is much smaller than our 3MB+ raw video frames.

> Even the theoretical maximum size of a TCP/IP packet, 64k, is much smaller than the data we need to send, so there's no way for us to use TCP/IP without suffering from fragmentation.

Just highlights that they do not have enough technical knowledge in house. Should spend the $1m/year saving on hiring some good devs.

replies(5): >>42069956 #>>42070181 #>>42070248 #>>42070804 #>>42070811 #
1. hathawsh ◴[] No.42070248[source]
Why do you say that? Their solution of using shared memory (structured as a ring buffer) sounds perfect for their use case. Bonus points for using Rust to do it. How would you do it?

Edit: I guess perhaps you're saying that they don't know all the networking configuration knobs they could exercise, and that's probably true. However, they landed on a more optimal solution that avoided networking altogether, so they no longer had any need to research network configuration. I'd say they made the right choice.
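
For concreteness, a minimal sketch of what the index logic of such a single-producer/single-consumer ring buffer might look like in Rust. The names, slot type, and layout here are illustrative assumptions, not the article's code, and a real shared-memory version would place this structure in a shared mapping and use interior mutability rather than `&mut self`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical SPSC ring buffer index scheme: the producer advances `head`,
// the consumer advances `tail`; slots are reused modulo capacity.
// Storage is a plain Vec here purely for illustration.
pub struct SpscRing {
    buf: Vec<u8>,        // slot storage (one video frame per slot in a real system)
    capacity: usize,     // number of slots; power of two for cheap masking
    head: AtomicUsize,   // next slot to write (producer-owned)
    tail: AtomicUsize,   // next slot to read (consumer-owned)
}

impl SpscRing {
    pub fn new(capacity: usize) -> Self {
        assert!(capacity.is_power_of_two());
        SpscRing {
            buf: vec![0; capacity],
            capacity,
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    // Producer side: returns false when the ring is full (consumer too slow).
    pub fn push(&mut self, byte: u8) -> bool {
        let head = self.head.load(Ordering::Relaxed);
        let tail = self.tail.load(Ordering::Acquire);
        if head.wrapping_sub(tail) == self.capacity {
            return false; // full: drop or block, per policy
        }
        self.buf[head & (self.capacity - 1)] = byte;
        self.head.store(head.wrapping_add(1), Ordering::Release);
        true
    }

    // Consumer side: returns None when the ring is empty.
    pub fn pop(&mut self) -> Option<u8> {
        let tail = self.tail.load(Ordering::Relaxed);
        let head = self.head.load(Ordering::Acquire);
        if tail == head {
            return None; // empty
        }
        let byte = self.buf[tail & (self.capacity - 1)];
        self.tail.store(tail.wrapping_add(1), Ordering::Release);
        Some(byte)
    }
}
```

The acquire/release pairing on `head` and `tail` is what lets producer and consumer run without a lock: each side publishes its index only after the slot write or read is complete.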

replies(2): >>42070439 #>>42076768 #
2. maxmcd ◴[] No.42070439[source]
Yes, maybe they're talking about this: https://en.wikipedia.org/wiki/TCP_window_scale_option
3. kikimora ◴[] No.42076768[source]
> Why do you say that?

Because reading how they arrived at the solution makes it clear they have little understanding of how the low-level stuff works. For example, they were surprised by the amount of data, by the fact that TCP packets are not the same as application-level packets or frames, etc.

As for the ring buffer design, I’m not sure I understand their solution. The article mentions the media encoder runs in a separate process. Chromium threads live in their own processes (afaik) as well. But the ring buffer requirement says “lock free”, which only makes sense inside a single process.

replies(2): >>42086021 #>>42095669 #
4. rstuart4133 ◴[] No.42086021[source]
> But ring buffer requirement says “lock free” which only make sense inside a single process.

No, "lock free" is a thing that's nice to have when you've got two threads accessing the same memory. It doesn't matter whether those two threads are in the same process or are two different processes accessing the same memory. It's almost certainly two different processes in this case, and the shared memory is probably a memory-mapped file.

Whatever it is, the shared memory approach is going to be much faster than using the kernel to ship the data between the two processes. Going via the kernel means two copies, and probably two syscalls as well.
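
The point that lock-free synchronization works across processes can be sketched briefly. In the example below two threads stand in for the two processes; the same release/acquire publication protocol applies unchanged when the atomics live in a memory-mapped file shared between processes, because atomics operate on plain shared bytes rather than on any per-process kernel object. Names here are illustrative:

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// One word of "shared memory" plus a publication flag. The producer writes
// the payload, then publishes with a release store; the consumer's acquire
// load guarantees it sees the payload once it sees the flag.
fn handoff(n: u64) -> u64 {
    let slot = Arc::new(AtomicU64::new(0));
    let ready = Arc::new(AtomicBool::new(false));
    let (s, r) = (slot.clone(), ready.clone());

    let producer = thread::spawn(move || {
        s.store(n, Ordering::Relaxed);    // write payload
        r.store(true, Ordering::Release); // publish: payload visible after this
    });

    while !ready.load(Ordering::Acquire) {} // consumer spins until published
    producer.join().unwrap();
    slot.load(Ordering::Relaxed)
}
```

With a real memory-mapped file, the `Arc`s would be replaced by pointers into the shared mapping, but the ordering rules are identical.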

replies(1): >>42163399 #
5. evoke4908 ◴[] No.42095669[source]
"Lock-free" does not in any way imply a single process. Quite the opposite. We don't call single-threaded code lock-free, because all single-threaded code is lock-free by definition. You can't really use locks at all in that context, so it makes no sense to describe it as lock-free. This is like gluten-free water, complete nonsense.

Lock-free code is designed for concurrent access, using some clever tricks to handle synchronization between threads or processes without actually taking a lock. Lock-free explicitly implies parallelism.

replies(1): >>42163373 #
6. kikimora ◴[] No.42163373{3}[source]
I’m talking about a single process with multiple threads, where lock-free makes sense.
7. kikimora ◴[] No.42163399{3}[source]
I understand you can set up a data structure in shared memory and use lock-free instructions to access it. However, I have never seen this done in practice due to the complexity. One particularly complicated scenario that comes to mind is dealing with unexpected process failures. That is quite different from dealing with exceptions in a thread.
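
One partial answer to that failure scenario is a seqlock-style sequence counter: the writer bumps the sequence to an odd value before writing and an even value after, so a reader can detect a write that never completed (for instance because the writer process died mid-update). The slot layout below is an illustrative sketch, not the article's design:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A single-writer slot guarded by a sequence counter.
// Odd sequence = write in progress; a reader that keeps seeing the same odd
// value can suspect the writer crashed mid-write and discard the slot.
struct SeqSlot {
    seq: AtomicU64,
    data: AtomicU64, // payload (a whole frame in a real system)
}

impl SeqSlot {
    fn new() -> Self {
        SeqSlot { seq: AtomicU64::new(0), data: AtomicU64::new(0) }
    }

    fn write(&self, value: u64) {
        let s = self.seq.load(Ordering::Relaxed);
        self.seq.store(s + 1, Ordering::Release); // odd: write in progress
        self.data.store(value, Ordering::Relaxed);
        self.seq.store(s + 2, Ordering::Release); // even: write complete
    }

    // Returns None if a write was in flight or raced with the read.
    fn try_read(&self) -> Option<u64> {
        let before = self.seq.load(Ordering::Acquire);
        if before % 2 == 1 {
            return None;
        }
        let value = self.data.load(Ordering::Relaxed);
        let after = self.seq.load(Ordering::Acquire);
        if before == after { Some(value) } else { None }
    }
}
```

This detects a torn slot but does not recover the ring's indices after a crash; robust recovery across process failures genuinely is the hard part, as the comment says.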