notacoward:
It's almost irresponsible to write an article on this topic in 2017 without explicitly mentioning bufferbloat or network-scheduling algorithms like CoDel designed to address it. If you really want to understand this article, read up on those first.

https://en.wikipedia.org/wiki/CoDel

drewg123:
One of the nice things about BBR is that it tries to avoid bufferbloat: it measures queueing at the bottleneck link and avoids contributing to it. I'm not a protocol expert, but our protocol team has done an implementation of BBR, and I've been in plenty of meetings where I've heard it described, so let me take a crack at explaining:

As I understand it, BBR goes through periodic bandwidth-probing cycles. When it ramps up and sends at a higher rate, it checks whether the RTT increases without a corresponding increase in delivered bandwidth or any packet loss. If so, it assumes a queue is building at the bottleneck, and it ramps down to a rate below the flow's expected max bandwidth, thereby draining the queue. Once the RTT has bottomed out, it ramps back up to the expected bandwidth. This keeps queues small.
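To make that concrete, here is a rough sketch of the gain-cycling idea in Python (illustrative only: the class, names, and constants are mine, not the actual BBR code):

    # Sketch of BBR-style bandwidth probing, not a real congestion controller.
    class BbrLikeProber:
        # Gain cycle: probe above the bandwidth estimate, then dip below it
        # to drain whatever queue the probe built, then cruise at the estimate.
        PACING_GAINS = [1.25, 0.75, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

        def __init__(self):
            self.max_bw = 0.0            # highest delivery rate observed (bytes/sec)
            self.min_rtt = float("inf")  # lowest RTT observed (seconds)
            self.cycle_index = 0

        def on_ack(self, delivery_rate, rtt):
            # Both signals come from ACK timing, not from packet loss.
            self.max_bw = max(self.max_bw, delivery_rate)
            self.min_rtt = min(self.min_rtt, rtt)

        def pacing_rate(self):
            # gain > 1 ramps up to see if more bandwidth is available;
            # gain < 1 ramps down so the bottleneck queue can drain.
            return self.PACING_GAINS[self.cycle_index] * self.max_bw

        def on_cycle_timer(self):
            # Advance the gain cycle, roughly once per min_rtt.
            self.cycle_index = (self.cycle_index + 1) % len(self.PACING_GAINS)

The key signal is the pairing in on_ack: if the 1.25x probe raises the RTT while the measured delivery rate stays flat, the extra packets must be sitting in a queue, so the following 0.75x phase exists purely to drain it.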

The bad thing about BBR is that it is much more expensive than other commonly used TCP congestion-control algorithms, due to the periodic ramp up/down. It also does not lend itself to hardware packet pacing. Early versions of our implementation are considerably more expensive than the default congestion control: e.g., a server that is ~50% idle at 90Gb/s with the default will be CPU-maxed before reaching 90Gb/s with BBR. But this is improving daily.
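To give a feel for why pacing costs CPU, here is a deliberately naive sketch (mine, not our stack's code): instead of handing the NIC a burst, the sender has to compute a departure time for every single packet and wait for it.

    import time

    def paced_send(sock, packets, pacing_rate):
        """Send packets spaced so the flow never exceeds pacing_rate (bytes/sec).

        Real stacks use high-resolution timers, or push this into NIC
        hardware; a spin loop like this one burns a core, which is the
        kind of overhead described above.
        """
        next_send = time.monotonic()
        for pkt in packets:
            while time.monotonic() < next_send:
                pass  # spin until this packet's departure time
            sock.send(pkt)
            next_send += len(pkt) / pacing_rate

As I understand it, hardware pacing would offload that timing to the NIC, but NIC rate limiters want a fairly stable rate per queue, and BBR's gain cycling changes the rate constantly, which is what makes it an awkward fit.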