
306 points by carlos-menezes | 1 comment
jpambrun No.41895238
This paper seems to neglect the effect of latency and packet loss. From my understanding, the biggest issue with TCP is that the congestion window gets cut every time a packet is lost or arrives out of order, which kills throughput. Higher latency makes that more likely to happen and makes the effect last longer.

To have any value, this paper needs simulations at multiple latencies, with some packet loss and latency jitter added.
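
To illustrate what I mean by the window getting cut, here is a toy Reno-style AIMD model (my own sketch, not anything from the paper; the loss rates and round count are made-up assumptions). The window halves on each loss event and grows by one segment per RTT otherwise, so even modest loss caps the average window:

    import random

    def simulate_aimd(loss_rate: float, rounds: int = 10_000) -> float:
        """Average congestion window (in segments) over `rounds` RTTs."""
        cwnd, total = 1.0, 0.0
        for _ in range(rounds):
            total += cwnd
            # Chance that at least one segment in this window is lost.
            if random.random() < 1 - (1 - loss_rate) ** int(cwnd):
                cwnd = max(1.0, cwnd / 2)  # multiplicative decrease on loss
            else:
                cwnd += 1.0                # additive increase per RTT
        return total / rounds

    for loss in (0.0001, 0.001, 0.01):
        # Throughput ~ avg_cwnd * MSS / RTT, so at a fixed RTT a smaller
        # average window means proportionally less throughput.
        print(f"loss={loss}: avg cwnd ~ {simulate_aimd(loss):.1f} segments")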

replies(1): >>41895273
1. dgacmu No.41895273
This is a bit of a misunderstanding. A single out-of-order packet will not cause a reduction; TCP uses three duplicate ACKs as a loss signal, so a packet must be reordered far enough to arrive after three later packets before the window gets cut.
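
You can see this with a toy cumulative-ACK receiver (a simplified sketch of mine; real TCP ACKs byte offsets, not segment numbers). A packet reordered by one position produces a single duplicate ACK, well under the three-dup-ACK threshold, while a packet that arrives after several later ones produces enough duplicates to trigger fast retransmit:

    def dup_acks(arrival_order: list[int]) -> int:
        """Count duplicate ACKs a cumulative-ACK receiver would emit."""
        received, next_expected, last_ack, dups = set(), 0, -1, 0
        for seq in arrival_order:
            received.add(seq)
            while next_expected in received:
                next_expected += 1
            ack = next_expected - 1  # highest in-order segment ACKed so far
            if ack == last_ack:
                dups += 1
            last_ack = ack
        return dups

    print(dup_acks([0, 2, 1, 3, 4]))     # reordered by one slot: 1 dup ACK, no retransmit
    print(dup_acks([0, 2, 3, 4, 5, 1]))  # arrives after 4 later packets: 4 dup ACKs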

Latency does not increase the chance of out-of-order arrival. Reordering is usually caused by multipath routing, or by its equivalent inside a router when packets of the same flow are handled by different stream processors. Most routers and networks are designed to keep the packets of a flow on the same path precisely to avoid this problem.
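
For example, ECMP-style load balancing picks a next hop by hashing the flow 5-tuple, so every packet of a given flow rides the same link and cannot be reordered by taking different paths (a rough sketch; the field set and hash choice are assumptions, not any particular router's implementation):

    import hashlib

    LINKS = ["link0", "link1", "link2", "link3"]

    def pick_link(src_ip: str, dst_ip: str,
                  src_port: int, dst_port: int, proto: str) -> str:
        # Hash the 5-tuple; every packet of the flow yields the same digest,
        # hence the same outgoing link.
        key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
        return LINKS[hashlib.sha256(key).digest()[0] % len(LINKS)]

    print(pick_link("10.0.0.1", "10.0.0.2", 40000, 443, "tcp"))
    print(pick_link("10.0.0.1", "10.0.0.2", 40000, 443, "tcp"))  # same link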

However, it is fair to say that traversing more links and routers probably increases the chance of out-of-order delivery, so reordering correlates loosely with latency. But the cause is not the latency itself: you can get the same thing in a data center network.