
257 points | pg | 1 comment
rarrrrrr ◴[] No.2121225[source]
Since no one has mentioned it yet: Varnish (varnish-cache.org), written by a FreeBSD kernel hacker, has a very nice feature in that it puts all overlapping concurrent requests for the same cacheable resource "on hold", fetches that resource from the backend only once, and then serves the same copy to all of them. Nearly all of the expensive content on HN would be cacheable by Varnish. You can then get down to pretty close to one backend request per content change, and stop worrying about how arbitrarily slow the actual backend server is, how many threads it has, how it deals with sockets, garbage collection, and all that.
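(A minimal sketch of how this might look in Varnish 4+ VCL; the request coalescing itself is built in and needs no configuration, so the only job here is making the content cacheable. The TTLs and the cookie check are illustrative assumptions, not from the comment above.)

```vcl
sub vcl_recv {
    # Only try to cache anonymous GET/HEAD traffic; logged-in
    # users (anyone with a cookie) go straight to the backend.
    if (req.http.Cookie) {
        return (pass);
    }
}

sub vcl_backend_response {
    # Short TTL: at most ~1 backend fetch per 30s per page.
    # While that one fetch is in flight, Varnish holds other
    # clients asking for the same object on its waiting list.
    set beresp.ttl = 30s;
    # Serve a stale copy while a background refresh runs.
    set beresp.grace = 2m;
}
```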
replies(4): >>2121261 #>>2121274 #>>2121319 #>>2122946 #
nuclear_eclipse ◴[] No.2121319[source]
Reverse proxies won't work for HN, because requests for the same resource from multiple users can't use the same results. Not only are certain bits of info customized for the user (like your name/link at the top), but even things like the comments and links are custom per user.

Things like a user's showdead setting, as well as whether the user is dead, can drastically change the output of each page. E.g., comments by a deaded user won't show as dead to that user, but they will for everyone else...

replies(5): >>2121335 #>>2121419 #>>2121430 #>>2121544 #>>2122733 #
jjoe ◴[] No.2121544[source]
There's cookie-based caching in Varnish (and in some other proxy caches too). Essentially, the cache key becomes the usual hash plus the cookie, like this:

sub vcl_hash {
    set req.hash += req.http.cookie;
}

What this means is that the cache is per-logged-in-user and pretty much personalized. The server's going to need a lot more RAM than usual. You can set a low TTL on the cache entries so they're flushed and not kept in memory indefinitely. But the performance boost is great.
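(For reference: the `req.hash +=` form is Varnish 2 syntax, current when this was written. In Varnish 4 and later the same per-user key would be built with `hash_data()`; this is a sketch of the equivalent, not from the original post.)

```vcl
sub vcl_hash {
    # Reproduce the default key (URL + Host) ...
    hash_data(req.url);
    hash_data(req.http.host);
    # ... then fold the cookie in, so each logged-in user
    # gets their own cached copy of every page.
    if (req.http.Cookie) {
        hash_data(req.http.Cookie);
    }
    return (lookup);
}
```

The low TTL mentioned above would then be set separately, e.g. `set beresp.ttl = 60s;` in `vcl_backend_response`.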

This is not recommended as an always-on measure. We wrote an entry about accomplishing something similar with Python and Varnish. Here it is if you're interested in reading about it: http://blog.unixy.net/2010/11/3-state-throttle-web-server/

Regards