
256 points BSDobelix | 9 comments
1. robinhoodexe ◴[] No.42163820[source]
Is tuning the TCP buffer size, for instance, worth it?
replies(6): >>42163899 #>>42166594 #>>42167580 #>>42167891 #>>42168594 #>>42169545 #
2. viraptor ◴[] No.42163899[source]
It depends. At home - probably not. On a fleet of 2000 machines where you want to keep network utilisation close to 100% with maximal throughput, and where non-optimal settings translate to a non-trivial dollar cost - yes.
replies(1): >>42164286 #
3. londons_explore ◴[] No.42164286[source]
TCP parameters are a classic example of where an autotuner might bite you in the ass...

Imagine your tuner keeps making the congestion control more aggressive, filling network links up to 99.99% to get more data through...

But then any other users of the network see super high latency and packet loss and fail because the tuner isn't aware of anything it isn't specifically measuring - and it's just been told to make this one application run as fast as possible.

replies(1): >>42167701 #
4. crest ◴[] No.42166594[source]
It depends mostly on the bandwidth-delay product and the packet loss you expect on each connection. There is a vast difference between a local interactive SSH session and downloading a large VM image from across an ocean.
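For a rough sense of the numbers, a back-of-the-envelope sketch (Python, with assumed RTT and bandwidth figures) of how the bandwidth-delay product translates into the buffer a single connection needs to keep the pipe full:

    # Bandwidth-delay product: bytes that must be in flight (and buffered)
    # to keep a link fully utilised. Figures are illustrative assumptions.
    def bdp_bytes(bandwidth_bps, rtt_s):
        return bandwidth_bps / 8 * rtt_s

    # Local interactive SSH: 1 Gbit/s LAN, ~0.5 ms RTT
    print(bdp_bytes(1e9, 0.0005))     # ~62 KB - default buffers are plenty

    # Large VM image across an ocean: 10 Gbit/s path, ~150 ms RTT
    print(bdp_bytes(10e9, 0.150))     # ~187 MB - far beyond typical defaults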
5. zymhan ◴[] No.42167580[source]
There's not much cost to doing it, so yes.
6. withinboredom ◴[] No.42167701{3}[source]
It literally covers this exact scenario in the readme and explains how it prevents that.
7. toast0 ◴[] No.42167891[source]
In my experience running big servers, tuning TCP buffers is definitely worth it, because different kinds of servers have different needs. It doesn't often work miracles, but tuning buffers is low cost, so the potential for a small positive impact is often worth the time to try.

If your servers communicate at high datarates with a handful of other servers, some of which are far away, but all of which have large potential throughput, you want big buffers. Big buffers allow you to have a large amount of data in flight to remote systems, which lets you maintain throughput regardless of where your servers are. You'd know to look at making buffers bigger if your throughput to far away servers is poor.
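One concrete way to get big buffers for a specific set of long-haul connections, without raising limits for everything on the box, is per socket. A sketch, assuming Linux and Python; note that setting SO_SNDBUF/SO_RCVBUF explicitly turns off the kernel's autotuning for that socket, and the values are capped by net.core.wmem_max/rmem_max:

    import socket

    PEER = ("203.0.113.10", 443)   # hypothetical far-away peer
    WANT = 32 * 1024 * 1024        # 32 MB, sized from the path's bandwidth-delay product

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Explicit sizes override per-connection autotuning for this socket.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, WANT)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WANT)
    s.connect(PEER)
    print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))  # what you actually got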

If you're providing large numbers of large downloads to public clients worldwide from servers in the US only, you probably want smaller buffers. Larger buffers would help with throughput to far away clients, but slow, far away clients will use a lot of buffer space and limit your concurrency. Clients that disappear mid-download will tie up buffers until the connection is torn down, and it's nice if that's less memory per connection. You'd know to look at making buffers smaller if you're using more memory than you think is appropriate for network buffers... a prereq is monitoring memory use by type.
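The memory arithmetic is what drives this - back-of-the-envelope, with assumed numbers:

    # Illustrative assumptions: a download server with many slow, far-away
    # clients holding connections open at once.
    concurrent_clients = 50_000
    per_socket_buffer = 4 * 1024 * 1024                    # 4 MB send buffer each
    print(concurrent_clients * per_socket_buffer / 2**30)  # ~195 GB of kernel memory

    # Capping buffers at 512 KB bounds that at ~24 GB, at the cost of
    # throughput to the highest bandwidth-delay-product clients.
    print(concurrent_clients * 512 * 1024 / 2**30)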

If you're serving dynamic web pages, you want your TCP buffers to be at least as big as your largest page, so that your dynamic generation never has to block for a slow client. You'd know to look at this if you see a lot of servers blocked on sending to clients, and/or if you see divergent server-measured response times for things that should be consistent. This is one case where getting buffer sizes right can enable miracles: Apache pre-fork + mod_php can scale amazingly well or amazingly poorly. It scales well when you can use an accept filter so Apache doesn't get a socket until the request is ready to be read, and PHP/Apache can send the whole response into the TCP buffer without waiting, then close the socket, letting the kernel deal with it from there. Keep-alive and TLS make this a bit harder, but the basic idea of having enough room to buffer the whole page still fits.
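A sketch of the idea (Python stand-in for a pre-fork worker; the names and sizes are assumptions): if the kernel send buffer is at least as large as the response, the worker hands the whole page to the kernel and moves on, and the kernel drains it to the slow client.

    import socket

    MAX_PAGE_BYTES = 256 * 1024   # assumed upper bound on a generated page

    def handle(conn: socket.socket, response: bytes) -> None:
        # With a send buffer at least as big as the response, sendall()
        # copies everything into the kernel and returns without blocking
        # on the client.
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, MAX_PAGE_BYTES)
        conn.sendall(response)
        conn.close()              # kernel keeps draining the buffer to the client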

8. fragmede ◴[] No.42168594[source]
If you're on 10 or 100 gig, it's almost required to get close to line speed performance.
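Rough numbers (same back-of-the-envelope as above, with assumed RTTs): at 10-100 Gbit/s even modest latency needs a window bigger than the few-MB maxima many distros ship, which is why raising net.ipv4.tcp_rmem/tcp_wmem (and net.core.rmem_max/wmem_max) tends to be step one:

    # Window needed to fill the link at assumed RTTs (bandwidth-delay product)
    for gbit in (10, 100):
        for rtt_ms in (0.5, 2, 10):
            bdp_mb = gbit * 1e9 / 8 * rtt_ms / 1000 / 2**20
            print(f"{gbit:>3} Gbit/s @ {rtt_ms} ms RTT -> {bdp_mb:.1f} MB")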
9. arminiusreturns ◴[] No.42169545[source]
Worked with Weka to maximize NFSv4/Mellanox throughput, and it was absolutely required to hit our targets.