306 points carlos-menezes | 1 comment
cletus ◴[] No.41891721[source]
At Google, I worked on a pure-JS speed test. At the time, Ookla was still Flash-based, so it wouldn't work on Chromebooks, which was a problem for installers who needed to verify an installation. I learned a lot about how TCP (I realize QUIC is UDP-based) responds to various factors.

I look at this article and consider the result pretty much as expected. Why? Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace. TCP has flow control and sequencing built in; QUIC makes you manage that yourself (sort of).

Now there can be good reasons to do that. TCP congestion control is famously out of date for modern connection speeds, which has led to newer algorithms like BBR [1], but it comes at a cost.
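
If you want to experiment with that on Linux, you can opt a single socket into BBR without changing the system-wide default. A minimal sketch in Python (assumptions: the tcp_bbr module is loaded and permitted; TCP_CONGESTION has been exposed by the socket module since Python 3.6):

  import socket

  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  # Ask the kernel to use BBR for this one connection; raises OSError if the
  # bbr module isn't available or allowed.
  s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
  s.connect(("example.com", 443))
  print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))  # e.g. b'bbr\x00...'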

But here's my biggest takeaway from all of that, and it's something rarely accounted for in network testing, Web application testing and so on: latency.

Anyone who lives in Asia or Australia should relate to this. 100ms of RTT latency can be devastating. It can take something that is completely responsive and make it utterly unusable. It limits the bandwidth a single connection can sustain (because of window sizes) and makes it less responsive to errors and congestion-control efforts (both up and down).
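
To make the window point concrete: a single connection can never move more than roughly (bytes in flight) / RTT, no matter how fat the pipe is. A back-of-the-envelope sketch (the 64 KiB figure is an assumed in-flight limit, e.g. an un-scaled TCP window):

  WINDOW_BYTES = 64 * 1024  # assumed per-connection in-flight limit

  for rtt_ms in (10, 100, 300):
      mbit = WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1e6
      print(f"RTT {rtt_ms:>3} ms -> at most ~{mbit:.1f} Mbit/s per connection")

  # RTT  10 ms -> at most ~52.4 Mbit/s per connection
  # RTT 100 ms -> at most ~5.2 Mbit/s per connection
  # RTT 300 ms -> at most ~1.7 Mbit/s per connection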

I would strongly urge anyone testing a network or Web application to run tests where they randomly add 100ms to the latency [2].
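
[2] does this at the interface level with tc/netem on Linux, which is the usual tool. If you can't touch the host's qdisc, even a toy delaying proxy in front of the service gets you most of the way there. A rough sketch (asyncio, injects ~100 ms plus jitter in each direction; the ports and hosts are placeholders):

  import asyncio, random

  LISTEN_PORT, UPSTREAM_HOST, UPSTREAM_PORT = 8080, "127.0.0.1", 80  # placeholders

  async def pump(reader, writer, delay=0.100, jitter=0.020):
      # Relay bytes, sleeping ~100 ms before forwarding each chunk.
      while data := await reader.read(65536):
          await asyncio.sleep(delay + random.uniform(-jitter, jitter))
          writer.write(data)
          await writer.drain()
      writer.close()

  async def handle(client_r, client_w):
      upstream_r, upstream_w = await asyncio.open_connection(UPSTREAM_HOST, UPSTREAM_PORT)
      await asyncio.gather(pump(client_r, upstream_w), pump(upstream_r, client_w))

  async def main():
      server = await asyncio.start_server(handle, "0.0.0.0", LISTEN_PORT)
      async with server:
          await server.serve_forever()

  asyncio.run(main())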

My point in bringing this up is that the overhead of QUIC may not matter in practice, because your effective bandwidth over a single TCP connection (or QUIC stream) may be MUCH lower than your actual raw bandwidth. Put another way, 45% extra data may still be a win, because managing your own congestion control might give you higher effective throughput between the two parties.
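
A toy comparison with assumed numbers: on a 100 Mbit/s link at 100 ms RTT, a window-limited connection is stuck around 5 Mbit/s, so a transport that keeps the pipe full can pay 45% more bytes and still come out far ahead:

  LINK_MBIT = 100           # assumed raw link speed
  RTT_S = 0.100             # assumed round-trip time
  WINDOW_BYTES = 64 * 1024  # assumed per-connection in-flight limit
  OVERHEAD = 0.45           # extra bytes on the wire, per the article

  window_limited = WINDOW_BYTES * 8 / RTT_S / 1e6   # ~5.2 Mbit/s
  full_pipe_goodput = LINK_MBIT / (1 + OVERHEAD)    # ~69.0 Mbit/s of useful data
  print(f"window-limited:          ~{window_limited:.1f} Mbit/s")
  print(f"full pipe, 45% overhead: ~{full_pipe_goodput:.1f} Mbit/s")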

[1]: https://atoonk.medium.com/tcp-bbr-exploring-tcp-congestion-c...

[2]: https://bencane.com/simulating-network-latency-for-testing-i...

replies(11): >>41891766 #>>41891768 #>>41891919 #>>41892102 #>>41892118 #>>41892276 #>>41892709 #>>41893658 #>>41893802 #>>41894376 #>>41894468 #
skissane ◴[] No.41891768[source]
> Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace

That’s not an inherent property of the QUIC protocol; it is just an implementation decision, one that was very necessary for QUIC to get off the ground, but now that it exists, maybe it should be revisited? There is no technical obstacle to implementing QUIC in the kernel, and if the performance benefits are significant, someone will almost surely do it sooner or later.

replies(3): >>41891946 #>>41891973 #>>41893160 #
conradev ◴[] No.41893160[source]
Looks like it’s being worked on: https://lwn.net/Articles/989623/
replies(1): >>41896868 #
1. throawayonthe ◴[] No.41896868[source]
also looks like current QUIC performance issues are a consideration; they're tested in Section 4:

> The performance gap between QUIC and kTLS may be attributed to:

  - The absence of Generic Segmentation Offload (GSO) for QUIC.
  - An additional data copy on the transmission (TX) path.
  - Extra encryption required for header protection in QUIC.
  - A longer header length for the stream data in QUIC.
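
For context on the first bullet: GSO is what lets a UDP-based QUIC stack hand the kernel one large buffer and have it chopped into MTU-sized datagrams, instead of paying a syscall per packet. The per-socket knob on Linux looks roughly like this (a sketch; UDP_SEGMENT is the constant from linux/udp.h, which Python's socket module doesn't export, and the peer address is a placeholder):

  import socket

  UDP_SEGMENT = 103  # from <linux/udp.h>; not exposed by the socket module

  s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  s.connect(("192.0.2.1", 4433))                        # placeholder QUIC peer
  s.setsockopt(socket.IPPROTO_UDP, UDP_SEGMENT, 1200)   # split writes into 1200-byte datagrams
  s.send(b"\x00" * 12000)                               # one syscall -> ten packets on the wire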