cletus:
At Google, I worked on a pure JS Speedtest. At the time, Ookla was still Flash-based, so it wouldn't work on Chromebooks, which was a problem for installers who needed to verify an installation. I learned a lot about how TCP (I realize QUIC is UDP) responds to various factors.

I look at this article and consider the result pretty much as expected. Why? Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace. TCP has flow control and sequencing; QUIC makes you manage that yourself (sort of).

Now there can be good reasons to do that. TCP congestion control is famously out-of-date with modern connection speeds, which has led to newer algorithms like BBR [1], but it comes at a cost.
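For what it's worth, BBR is easy to try on Linux: system-wide it's sysctl -w net.ipv4.tcp_congestion_control=bbr, or you can flip it per socket with the TCP_CONGESTION option. A rough Go sketch, assuming a Linux box with the tcp_bbr module loaded; the port is just a placeholder:

    package main

    import (
        "log"
        "net"

        "golang.org/x/sys/unix"
    )

    // useBBR switches one TCP socket to the BBR congestion controller.
    func useBBR(c *net.TCPConn) error {
        raw, err := c.SyscallConn()
        if err != nil {
            return err
        }
        var sockErr error
        if err := raw.Control(func(fd uintptr) {
            sockErr = unix.SetsockoptString(int(fd), unix.IPPROTO_TCP, unix.TCP_CONGESTION, "bbr")
        }); err != nil {
            return err
        }
        return sockErr // e.g. fails if the bbr module isn't available
    }

    func main() {
        ln, err := net.Listen("tcp", ":9000")
        if err != nil {
            log.Fatal(err)
        }
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        if err := useBBR(conn.(*net.TCPConn)); err != nil {
            log.Fatal(err)
        }
        // ...send bulk data; the sending side is where the algorithm matters.
    }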

But here's my biggest takeaway from all that, and it's something so rarely accounted for in network testing, Web application testing and so on: latency.

Anyone who lives in Asia or Australia should relate to this. 100ms of RTT latency can be devastating. It can take something that is completely responsive and make it utterly unusable. It lowers the bandwidth a connection can support (because of the windows) and makes it less responsive to errors and congestion control efforts (both up and down).
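To put numbers on the window effect: a sender can have at most one receive window in flight per round trip, so a single stream tops out at roughly window / RTT. With the classic 64 KB window (no window scaling) at 100ms RTT that's

    64 KB / 100 ms ≈ 5 Mbit/s

no matter how fat the pipe is. Flipped around, keeping a 1 Gbit/s path full at 100ms needs about 12.5 MB of in-flight data (the bandwidth-delay product), far beyond most default buffer sizes.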

I would strongly urge anyone testing a network or Web application to run tests where they randomly add 100ms to the latency [2].
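On Linux, netem does this at the interface level (tc qdisc add dev eth0 root netem delay 100ms). If you'd rather keep it inside a test, a crude sketch in Go is to wrap the connection and sleep before each write; it ignores jitter and doesn't delay ACKs, but it's enough to feel the difference (example.com is just a stand-in):

    package main

    import (
        "net"
        "time"
    )

    // slowConn fakes ~100ms of extra one-way latency by sleeping before
    // every write. Much cruder than tc/netem, but self-contained.
    type slowConn struct {
        net.Conn
        delay time.Duration
    }

    func (c slowConn) Write(p []byte) (int, error) {
        time.Sleep(c.delay)
        return c.Conn.Write(p)
    }

    func main() {
        raw, err := net.Dial("tcp", "example.com:80")
        if err != nil {
            panic(err)
        }
        var conn net.Conn = slowConn{Conn: raw, delay: 100 * time.Millisecond}
        defer conn.Close()
        conn.Write([]byte("HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n"))
    }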

My point in bringing this up is that the overhead of QUIC may not practically matter, because your effective bandwidth over a single TCP connection (or QUIC stream) may be MUCH lower than your actual raw bandwidth. Put another way, 45% extra data may still be a win, because managing your own congestion control might give you higher effective throughput between the two parties.

[1]: https://atoonk.medium.com/tcp-bbr-exploring-tcp-congestion-c...

[2]: https://bencane.com/simulating-network-latency-for-testing-i...

klabb3:
I did a bunch of real-world testing of my file transfer app [1]. Went in with the expectation that QUIC would be amazing. Came out frustrated for many reasons and switched back to TCP. It’s obvious in hindsight, but with TCP you say “hey kernel send this giant buffer please”, whereas UDP is packet-oriented: by default that means one syscall per datagram. So even pushing zeroes has a massive CPU cost on most OSs and consumer hardware, from all the mode switches. Yes, there are ways around it, but no, they’re not easy nor ready in my experience. Plus it limits your choice of languages/libraries/platforms.
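To give a flavour of what the workarounds look like: on Linux you can hand the kernel a whole batch of datagrams per syscall (sendmmsg), which golang.org/x/net/ipv4 exposes as WriteBatch. A rough sketch; the destination is a placeholder, and if I remember right the batch call quietly degrades to one message at a time on non-Linux platforms, which is exactly the problem:

    package main

    import (
        "net"

        "golang.org/x/net/ipv4"
    )

    func main() {
        c, err := net.ListenPacket("udp4", ":0")
        if err != nil {
            panic(err)
        }
        defer c.Close()

        dst, err := net.ResolveUDPAddr("udp4", "203.0.113.1:4242") // placeholder peer
        if err != nil {
            panic(err)
        }

        // Queue 32 datagrams and submit them with a single sendmmsg call
        // instead of 32 separate write syscalls.
        pc := ipv4.NewPacketConn(c)
        msgs := make([]ipv4.Message, 0, 32)
        for i := 0; i < 32; i++ {
            msgs = append(msgs, ipv4.Message{
                Buffers: [][]byte{make([]byte, 1200)}, // typical QUIC-sized payload
                Addr:    dst,
            })
        }
        if _, err := pc.WriteBatch(msgs, 0); err != nil {
            panic(err)
        }
    }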

(Fun bonus story: I noticed significant drops in throughput when using battery on a MacBook. Something to do with the efficiency cores I assume.)

Secondly, QUIC does congestion control poorly (I was using quic-go, so mileage may vary). No tuning really helped, and TCP streams would take more bandwidth if both were present.

Third, the APIs are weird, man. QUIC itself has multiple streams, which makes it not a drop-in replacement for TCP. The idea, however, is that HTTP/3 is drop-in replaceable at a higher level (which I can’t speak to, because I didn’t go that route). Worth keeping in mind if you’re working at the stream level.
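For anyone who hasn’t seen it, the stream level in quic-go looks roughly like this; the exact signatures have moved around between releases, and the address and ALPN string here are made up:

    package main

    import (
        "context"
        "crypto/tls"

        "github.com/quic-go/quic-go"
    )

    func main() {
        ctx := context.Background()
        tlsConf := &tls.Config{NextProtos: []string{"my-proto"}} // ALPN is mandatory

        // One QUIC connection carries many independent streams. This is the
        // part that doesn't map 1:1 onto a single TCP net.Conn.
        conn, err := quic.DialAddr(ctx, "example.com:4242", tlsConf, nil)
        if err != nil {
            panic(err)
        }

        stream, err := conn.OpenStreamSync(ctx) // behaves like an io.ReadWriteCloser
        if err != nil {
            panic(err)
        }
        stream.Write([]byte("hello"))
        stream.Close() // closes only the write side of this one stream

        stream2, _ := conn.OpenStreamSync(ctx) // a second, independent stream
        _ = stream2
    }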

In conclusion I came out pretty much defeated, but also with a newfound respect for all the optimizations and resilience of our old friend TCP. It’s really an amazing piece of tech, and it’s just there, for free, always provided by the OS. Even some of the main issues with TCP are not design faults but conservative/legacy defaults (buffer limits on Linux, Nagle, etc.). I really just wish we could improve it instead of reinventing the wheel.
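To make the “conservative defaults” bit concrete: Nagle is a per-socket flag and the buffer ceilings are sysctls. The knobs in Go look something like this (example.com is a stand-in):

    package main

    import "net"

    func main() {
        conn, err := net.Dial("tcp", "example.com:80")
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        tcp := conn.(*net.TCPConn)

        // Nagle: Go already disables it (TCP_NODELAY on by default), but this
        // is the switch if your platform or library doesn't.
        tcp.SetNoDelay(true)

        // Socket buffers (SO_SNDBUF/SO_RCVBUF). On Linux these are silently
        // capped by net.core.wmem_max / net.core.rmem_max, so long fat paths
        // usually also need the net.ipv4.tcp_rmem / tcp_wmem sysctls raised.
        tcp.SetWriteBuffer(4 << 20)
        tcp.SetReadBuffer(4 << 20)
    }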

[1]: https://payload.app/

eptcyka:
One does not need to, and should not, send one packet per syscall.
jacobgorm:
On platforms like macOS that don’t have UDP packet pacing, you more or less have to.