
257 points by pg | 1 comment | source
rarrrrrr ◴[] No.2121225[source]
Since no one has mentioned it yet: Varnish-cache.org, written by a FreeBSD kernel hacker, has a very nice feature: it puts all overlapping concurrent requests for the same cacheable resource "on hold", fetches that resource only once from the backend, then serves the same copy to all. Nearly all the expensive content on HN would be cacheable by Varnish. Then you can get it down to pretty close to "1 backend request per content change" and stop worrying about how arbitrarily slow the actual backend server is, how many threads, how you deal with the socket, garbage collection, and all that.
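A rough sketch of that coalescing idea in Go (a toy illustration, not Varnish's actual implementation; renderPage and the "/news" key are made up for the example):

    package main

    // Toy request coalescing: concurrent requests for the same key wait on one
    // in-flight backend fetch and all receive its result.
    import (
        "fmt"
        "sync"
        "time"
    )

    type call struct {
        wg  sync.WaitGroup
        val string
    }

    type Coalescer struct {
        mu       sync.Mutex
        inflight map[string]*call
    }

    func NewCoalescer() *Coalescer {
        return &Coalescer{inflight: make(map[string]*call)}
    }

    // Get returns fetch(key), but if another goroutine is already fetching the
    // same key it waits for that fetch instead of starting a new one.
    func (c *Coalescer) Get(key string, fetch func(string) string) string {
        c.mu.Lock()
        if existing, ok := c.inflight[key]; ok {
            c.mu.Unlock()
            existing.wg.Wait() // someone else is already asking the backend
            return existing.val
        }
        cl := &call{}
        cl.wg.Add(1)
        c.inflight[key] = cl
        c.mu.Unlock()

        cl.val = fetch(key) // only this goroutine hits the backend
        cl.wg.Done()

        c.mu.Lock()
        delete(c.inflight, key)
        c.mu.Unlock()
        return cl.val
    }

    // renderPage simulates an expensive backend render (hypothetical).
    func renderPage(path string) string {
        time.Sleep(200 * time.Millisecond)
        return "rendered " + path
    }

    func main() {
        c := NewCoalescer()
        var wg sync.WaitGroup
        for i := 0; i < 5; i++ { // five overlapping requests for the front page
            wg.Add(1)
            go func() {
                defer wg.Done()
                fmt.Println(c.Get("/news", renderPage))
            }()
        }
        wg.Wait() // renderPage typically runs once; all five get the same copy
    }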
replies(4): >>2121261 #>>2121274 #>>2121319 #>>2122946 #
dauphin ◴[] No.2121261[source]
You clearly don't understand the problem. Even mod_pagespeed or memcached would be more appropriate here: They are rate-limited by the LISP kernel anyway (we are talking about dynamic content here).
replies(1): >>2121593 #
rarrrrrr ◴[] No.2121593[source]
Varnish sits in front of the backend, responding directly to any request it has cached content for, without bothering the backend at all. It lives higher in the stack than pagespeed or memcache, and is not limited by the backend's speed in any way.
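A toy sketch of that layering in Go (not Varnish; the backend address and the never-expiring cache are assumptions for illustration): cache hits are answered from memory and never reach the origin.

    package main

    // Toy caching proxy that sits in front of an origin server. Hits are served
    // from memory; only misses generate a backend request.
    import (
        "io"
        "log"
        "net/http"
        "sync"
    )

    type cachingProxy struct {
        backend string // origin server, e.g. the slow app process
        mu      sync.RWMutex
        cache   map[string][]byte // path -> cached body
    }

    func (p *cachingProxy) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        p.mu.RLock()
        body, ok := p.cache[r.URL.Path]
        p.mu.RUnlock()
        if ok {
            w.Write(body) // hit: the backend is never contacted
            return
        }

        resp, err := http.Get(p.backend + r.URL.Path) // miss: one backend fetch
        if err != nil {
            http.Error(w, "backend unreachable", http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()
        body, err = io.ReadAll(resp.Body)
        if err != nil {
            http.Error(w, "backend read failed", http.StatusBadGateway)
            return
        }

        p.mu.Lock()
        p.cache[r.URL.Path] = body // toy: cache forever, no invalidation
        p.mu.Unlock()
        w.Write(body)
    }

    func main() {
        proxy := &cachingProxy{
            backend: "http://localhost:8080", // hypothetical origin
            cache:   make(map[string][]byte),
        }
        log.Fatal(http.ListenAndServe(":8000", proxy)) // cache faces the clients
    }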