
257 points by pg | 5 comments
rarrrrrr:
Since no one has mentioned it yet: Varnish (varnish-cache.org), written by a FreeBSD kernel hacker, has a very nice feature. It puts all overlapping concurrent requests for the same cacheable resource "on hold", fetches that resource from the backend only once, and then serves the same copy to all of them. Nearly all of the expensive content on HN would be cacheable by Varnish. That gets you down to pretty close to "one backend request per content change", and you can stop worrying about how arbitrarily slow the actual backend server is, how many threads it has, how it handles the socket, garbage collection, and all of that.
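
A rough sketch of what that might look like in VCL (4.x syntax; the backend address, URL patterns, and 30-second TTL are made up, just to show the shape of it):

    vcl 4.0;

    backend default {
        .host = "127.0.0.1";   # wherever the news server actually listens
        .port = "8080";
    }

    sub vcl_recv {
        # Drop cookies on list/item pages so they're treated as anonymous
        # and cacheable; concurrent misses for the same URL then collapse
        # into a single backend fetch.
        if (req.url ~ "^/(news|newest|item)") {
            unset req.http.Cookie;
        }
    }

    sub vcl_backend_response {
        # Cache briefly: the backend sees at most one fetch per URL per
        # 30 seconds, no matter how many clients are asking for it.
        set beresp.ttl = 30s;
    }

The coalescing itself isn't something you configure: for cacheable objects, Varnish puts concurrent misses for the same URL on a waiting list and does a single fetch.
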
j_baker:
Can't you only use Varnish for mostly non-dynamic content? For example, wouldn't the fact that the page displays my username and karma score at the top mean you couldn't use Varnish (or at least that it would be more difficult)?
seiji:
Check out http://www.varnish-cache.org/trac/wiki/ESIfeatures

There could be a private internal URL to just return username and karma to populate the user info header.
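
As a sketch (the /user-fragment URL is invented here, and this is 4.x-style VCL): the page template would emit <esi:include src="/user-fragment"/> where the username/karma header goes, Varnish would be told to process ESI on the cached pages, and the fragment itself would be passed through to the backend:

    sub vcl_recv {
        if (req.url == "/user-fragment") {
            # Tiny per-user response; let it go to the backend every time.
            return (pass);
        }
    }

    sub vcl_backend_response {
        if (bereq.url != "/user-fragment") {
            # Scan cached pages for <esi:include> tags and stitch the
            # fragment in on the way out.
            set beresp.do_esi = true;
        }
    }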

j_baker:
Doesn't that kind of defeat the purpose, though? The point of using Varnish is that it keeps you from having to hit the backend at all. This is getting into an area where something like memcached might be more appropriate.
danudey:
Well, the point of using Varnish is to keep you from having to hit the backend any more than is absolutely necessary. It's trivial to generate the HTML showing a user's username and karma, and even if it weren't, it could be stored in memcached. Generating the front page, the comment pages, and so on is the hard part, and Varnish can keep those from being regenerated any more often than necessary.
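
Or, instead of memcached, you could let Varnish cache the fragment itself, keyed per user, so even that rarely reaches the backend. A sketch, assuming the same invented /user-fragment URL and that the backend doesn't set cookies on that response:

    sub vcl_recv {
        if (req.url == "/user-fragment") {
            # Look it up in cache even though the request carries a cookie;
            # the built-in VCL would otherwise pass cookied requests.
            return (hash);
        }
    }

    sub vcl_hash {
        if (req.url == "/user-fragment") {
            # One cached copy per session cookie, so each user gets their
            # own username/karma header straight from Varnish.
            hash_data(req.http.Cookie);
        }
        # Falls through to the built-in vcl_hash, which adds URL and host.
    }

    sub vcl_backend_response {
        if (bereq.url == "/user-fragment") {
            set beresp.ttl = 10s;   # karma can be a few seconds stale
        }
    }
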
j_baker:
Of course, but I seem to recall pg writing at some point that one of the goals of HN is to prove that "slow" languages can scale using caching. I assume, therefore, that he already has some kind of caching in place for those things. If Varnish isn't going to save a hit to the server (which seems to be the primary thing slowing things down), what value does it provide over what pg already has in place?
danudey:
The requests won't queue up as badly, because the server can clear out 'simple' requests in much less time than it takes to generate full pages. Since each of those requests takes so little time to handle, they get cleared out faster than they come in, whereas the larger requests queue up faster than they can be handled.
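
To put some purely illustrative numbers on it: if a full comments page takes around 200 ms of backend time to render and the username/karma fragment takes around 5 ms, a single backend worker can clear roughly 200 fragment requests per second but only about 5 page renders per second. With Varnish absorbing the page renders (one per URL per expiry), the backend mostly sees the cheap requests, so the queue stays short at the same overall request rate.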