
306 points carlos-menezes | 6 comments
cletus ◴[] No.41891721[source]
At Google, I worked on a pure JS Speedtest. At the time, Ookla was still Flash-based, so it wouldn't work on Chromebooks. That was a problem for installers who needed to verify an installation. I learned a lot about how TCP (I realize QUIC is UDP) responds to various factors.

I look at this article and consider the result pretty much as expected. Why? Because QUIC pushes flow control out of the kernel (and possibly network adapters) into userspace. TCP has flow control and sequencing built in; QUIC makes you manage that yourself (sort of).

Now there can be good reasons to do that. TCP congestion control is famously out-of-date with modern connection speeds, leading to newer algorithms like BBR [1], but it comes at a cost.
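
If you're curious which algorithm your own machine is using, a quick check on Linux (assuming the standard /proc/sys paths) looks something like this:

    # Show the kernel's current and available TCP congestion control algorithms.
    # Linux-only; assumes the standard /proc/sys paths. Switching to BBR needs
    # root and a kernel with BBR built in, e.g.
    #   sysctl -w net.ipv4.tcp_congestion_control=bbr
    with open("/proc/sys/net/ipv4/tcp_congestion_control") as f:
        print("current:  ", f.read().strip())
    with open("/proc/sys/net/ipv4/tcp_available_congestion_control") as f:
        print("available:", f.read().strip())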

But here's my biggest takeaway from all that, and it's something rarely accounted for when testing networks and Web applications: latency.

Anyone who lives in Asia or Australia should relate to this. 100ms RTT latency can be devastating. It can take something that is completely responsive and make it utterly unusable. It caps the bandwidth a single connection can support (because of the window sizes) and makes it less responsive to errors and congestion-control efforts (both ramping up and backing off).
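
A back-of-envelope sketch of the window effect (the 64 KiB window is just an illustrative worst case, as if window scaling weren't in play):

    # Bandwidth-delay product: with a fixed window, throughput is capped at
    # window / RTT no matter how fat the pipe is. Numbers are illustrative.
    window_bytes = 64 * 1024          # assumed 64 KiB window
    for rtt_ms in (10, 100, 200):
        mbit_s = window_bytes * 8 / (rtt_ms / 1000) / 1e6
        print(f"RTT {rtt_ms:3d} ms -> at most {mbit_s:.1f} Mbit/s")
    # RTT  10 ms -> ~52 Mbit/s; 100 ms -> ~5.2 Mbit/s; 200 ms -> ~2.6 Mbit/s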

I would strongly urge anyone testing a network or Web application to run tests where they randomly add 100ms to the latency [2].
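
On Linux that's easy to do with tc/netem; a rough sketch (the interface name is a placeholder and it needs root):

    # Add ~100 ms of extra latency (plus some jitter) to an interface with
    # tc/netem, then remove it again. Interface name is a placeholder.
    import subprocess

    IFACE = "eth0"  # adjust to your interface

    def add_latency(delay_ms=100, jitter_ms=20):
        subprocess.run(["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
                        "delay", f"{delay_ms}ms", f"{jitter_ms}ms"], check=True)

    def clear_latency():
        subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root", "netem"],
                       check=True)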

My point in bringing this up is that the overhead of QUIC may not practically matter, because your effective bandwidth over a single TCP connection (or QUIC stream) may be MUCH lower than your actual raw bandwidth. Put another way, 45% extra data may still be a win, because managing your own congestion control might give you higher effective throughput between the two parties.
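
As a toy example with entirely made-up numbers: if a window-limited TCP connection sustains ~5 Mbit/s at 100 ms RTT but a better-tuned congestion controller sustains ~12 Mbit/s, then even 45% more bytes on the wire finishes the transfer sooner.

    # Toy comparison, all numbers invented for illustration.
    payload_mbit = 100 * 8        # 100 MB of payload
    tcp_rate_mbit = 5.2           # e.g. window-limited TCP at 100 ms RTT
    quic_rate_mbit = 12.0         # hypothetical better congestion control
    quic_overhead = 1.45          # 45% extra data on the wire

    tcp_time = payload_mbit / tcp_rate_mbit
    quic_time = payload_mbit * quic_overhead / quic_rate_mbit
    print(f"TCP: {tcp_time:.0f} s   QUIC-ish: {quic_time:.0f} s")
    # -> TCP: ~154 s, QUIC-ish: ~97 s despite the extra bytes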

[1]: https://atoonk.medium.com/tcp-bbr-exploring-tcp-congestion-c...

[2]: https://bencane.com/simulating-network-latency-for-testing-i...

replies(11): >>41891766 #>>41891768 #>>41891919 #>>41892102 #>>41892118 #>>41892276 #>>41892709 #>>41893658 #>>41893802 #>>41894376 #>>41894468 #
1. reshlo ◴[] No.41892276[source]
> Anyone who lives in Asia or Australia should relate to this. 100ms RTT latency can be devastating.

When I used to (try to) play online games in NZ a few years ago, RTT to US West servers sometimes exceeded 200ms.

replies(2): >>41892498 #>>41893624 #
2. indrora ◴[] No.41892498[source]
When I was younger, I played a lot of cs1.6 and hldm. Living in rural New Mexico, my ping times were often 150-250ms.

DSL kills.

replies(1): >>41893340 #
3. somat ◴[] No.41893340[source]
I used to play netquake (not quakeworld) at up to 800 ms of lag; past that was too much for even young, stupid me.

For those who don't know the difference: netquake was the original, strictly client-server version of quake. You hit the forward key, the client sends that to the server, and the server sends back where you moved. quakeworld was the client-side prediction enhancement that came later: you hit forward, the client moves you forward and sends the input to the server at the same time, and if there are differences they get reconciled later.

For the most part, client-side prediction feels better to play. However, when there are network problems or large amounts of lag, a lot of artifacts start to show up: rubberbanding, jumping around, hits that don't register. Pure client-server feels worse overall; everything gets sluggish and mushy, but movement is a little more predictable and logical and can sort of be anticipated.
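
A very rough sketch of the two models (illustrative pseudocode only, nothing like the actual Quake netcode):

    # Illustrative only: pure client/server vs. client-side prediction.
    # "send"/"recv" stand in for the real netcode.

    def pure_client_server_tick(key_pressed, send, recv):
        # netquake-style: input goes to the server, and the player only moves
        # once the authoritative position comes back (a full RTT later).
        send(key_pressed)
        return recv()

    def predicted_tick(key_pressed, local_state, simulate, send, try_recv):
        # quakeworld-style: apply the input locally right away, send it too,
        # and snap to the server state if it later disagrees (rubberbanding).
        predicted = simulate(local_state, key_pressed)
        send(key_pressed)
        server_state = try_recv()          # None if nothing has arrived yet
        return server_state if server_state is not None else predicted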

I have not played quake in 20 years, but one thing I remember is that past 800 ms of lag the lava felt magnetic; it would just suck you in, every time.

4. albertopv ◴[] No.41893624[source]
I would be surprised if online games used TCP. Anyway, physics is still there, and light is fast, but not that fast. In 10 ms it travels about 3,000 km in a vacuum; NZ to the US west coast is about 11,000 km, so even at vacuum speed the round trip takes at least ~73 ms. Cables are much longer, light is slower in a medium, add the latency of network devices, and 200 ms from NZ to the USA is not that bad.
replies(2): >>41894331 #>>41901615 #
5. Hikikomori ◴[] No.41894331[source]
The speed of light in fiber is about 200,000 km/s. Most of the latency is due to distance; modern routers have a forwarding latency of tens of microseconds, and some switches can start sending a packet out before they have fully received it (cut-through switching).
6. reshlo ◴[] No.41901615[source]
The total length of the relevant sections of the Southern Cross Cable is 12,135km, as it goes via Hawaii.
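
Plugging that distance into the ~200,000 km/s fiber figure above gives a floor on the RTT before any equipment delay (a rough sketch; it ignores routing and any difference in the return path):

    # Minimum round-trip time over 12,135 km of fiber at ~200,000 km/s,
    # ignoring routing, queuing, and the return path possibly differing.
    cable_km = 12_135
    fiber_speed_km_s = 200_000
    rtt_ms = 2 * cable_km / fiber_speed_km_s * 1000
    print(f"{rtt_ms:.0f} ms")   # ~121 ms before any equipment delay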

The main reason I made my original comment was to point out that the real numbers are more than double what the other commenter called “devastating” latency.

https://en.wikipedia.org/wiki/Southern_Cross_Cable