257 points by pg | 10 comments
1. earle ◴[] No.2120831[source]
HN only supports 20 req per second???
replies(3): >>2120860 #>>2120867 #>>2121955 #
2. PStamatiou ◴[] No.2120860[source]
flat files, no database
replies(1): >>2120885 #
3. imp ◴[] No.2120867[source]
Maybe that's just for dynamic pages. Probably the most popular pages are cached and wouldn't count toward that 20 req/sec. Just my own wild guess though.
4. pg ◴[] No.2120885[source]
That's not the bottleneck. Essentially there's an in-memory database (known as hash tables). Stuff is lazily loaded off disk into memory, but most of the frequently needed stuff is loaded once at startup.

The bottleneck is the amount of garbage created by generating pages. IIRC there is some horrible inefficiency involving UTF-8 characters.

replies(3): >>2120907 #>>2120945 #>>2121444 #
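pg's setup above (flat files on disk, lazily loaded into in-memory hash tables, with hot items preloaded at startup) can be sketched roughly as follows. This is an illustrative Python sketch only: the class, file layout, and method names are assumptions for the example, not HN's actual Arc code.

```python
import os

class LazyStore:
    """In-memory "database" backed by flat files: a plain hash table
    that loads each item from disk only on first access.
    (Sketch only -- layout and names are assumptions, not HN's code.)"""

    def __init__(self, data_dir):
        self.data_dir = data_dir
        self.cache = {}  # the in-memory hash table

    def get(self, item_id):
        # Lazy load: hit the disk only on a cache miss.
        if item_id not in self.cache:
            path = os.path.join(self.data_dir, str(item_id))
            with open(path) as f:
                self.cache[item_id] = f.read()
        return self.cache[item_id]

    def preload(self, item_ids):
        # Load the frequently needed items once at startup.
        for item_id in item_ids:
            self.get(item_id)
```

After `preload`, reads of hot items never touch disk; everything else is pulled in on demand and then served from memory.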
5. ezalor ◴[] No.2120907{3}[source]
> That's not the bottleneck.

What is the main bottleneck of HN?

replies(1): >>2120915 #
6. pg ◴[] No.2120915{4}[source]
See the second paragraph.
7. svlla ◴[] No.2120945{3}[source]
perhaps continuations could be used more judiciously as well
8. gills ◴[] No.2121444{3}[source]
Are you using any sort of in-memory fragment caching? That seems like it might reduce some render overhead.
replies(1): >>2121617 #
9. pg ◴[] No.2121617{4}[source]
A great deal, and it does.
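The fragment caching gills asks about (and pg confirms) amounts to memoizing rendered HTML fragments so repeated page builds skip the render step, which also cuts the garbage generated per page. A minimal sketch, assuming a per-item cache keyed by id with explicit invalidation on edit; the names and the render callback are assumptions for illustration:

```python
class FragmentCache:
    """Memoize rendered HTML fragments so repeated page builds
    reuse them instead of re-rendering (and re-allocating).
    (Sketch only -- keys and invalidation policy are assumptions.)"""

    def __init__(self, render_fn):
        self.render_fn = render_fn  # e.g. renders one comment to HTML
        self.fragments = {}

    def html(self, item_id, item):
        # Render on first request, then serve the cached fragment.
        if item_id not in self.fragments:
            self.fragments[item_id] = self.render_fn(item)
        return self.fragments[item_id]

    def invalidate(self, item_id):
        # Drop a fragment when its item is edited, forcing a re-render.
        self.fragments.pop(item_id, None)
```

Since most comments are never edited, nearly every page view after the first serves prebuilt fragments, which is why pg says it helps "a great deal".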
10. dauphin ◴[] No.2121955[source]
That's 0.05 seconds per request: actually pretty good.