
495 points | guntars | 1 comment
bmcahren No.44981313
This was a good read and great work. Can't wait to see the performance tests.

Your write-up connected some dots from when I was 11, trying to set up a database/backend and finding lots of cgi-bin scripts online. I realize now those were spinning up a new process for each request: https://en.wikipedia.org/wiki/Common_Gateway_Interface
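A minimal sketch of that CGI model, for anyone who never ran into it: the server fork()s and exec()s a separate program for every request, passing request metadata in environment variables and reading the response from the program's stdout. The program below is hypothetical, but REQUEST_METHOD and QUERY_STRING are the standard CGI variables.

```c
/* hello_cgi.c -- illustrative CGI program (not from the article).
 * Classic CGI: the web server fork()s and exec()s this binary for
 * every single request, which is exactly the per-request process
 * cost described above. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *method = getenv("REQUEST_METHOD"); /* e.g. "GET" */
    const char *query  = getenv("QUERY_STRING");   /* text after '?' */

    /* A CGI response is headers, a blank line, then the body. */
    printf("Content-Type: text/plain\r\n\r\n");
    printf("method=%s query=%s\n",
           method ? method : "(none)",
           query  ? query  : "(none)");
    return 0;
}
```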

I remember when sendfile became available for my large gaming forum with dozens of TB of demo downloads. That alone was huge for concurrency.
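Roughly why sendfile(2) helps with that kind of workload, as a hedged sketch rather than anyone's actual server code: instead of read()ing file data into a userspace buffer and write()ing it back out to the socket, the kernel moves the file pages to the socket directly, so each large download ties up far less CPU and memory per connection. The function and variable names below are placeholders.

```c
/* sendfile_sketch.c -- illustrative only: stream a file to an already
 * connected socket without copying it through userspace buffers. */
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Assumes `sock_fd` is a connected TCP socket and `path` is the file
 * to serve (e.g. a large demo download). */
int serve_file(int sock_fd, const char *path) {
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0)
        return -1;

    struct stat st;
    if (fstat(file_fd, &st) < 0) {
        close(file_fd);
        return -1;
    }

    off_t offset = 0;
    while (offset < st.st_size) {
        /* The kernel copies file -> socket directly; no read()/write()
         * round trip through a userspace buffer. */
        ssize_t sent = sendfile(sock_fd, file_fd, &offset,
                                st.st_size - offset);
        if (sent <= 0)
            break;
    }
    close(file_fd);
    return offset == st.st_size ? 0 : -1;
}
```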

I thought I had sworn off this type of engineering, but between this, the Netflix case of the extra 40ms, and the GTA 5 70% load-time reduction, maybe there is a lot more impactful work to be done.

https://netflixtechblog.com/life-of-a-netflix-partner-engine...

https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...

kev009 No.44981421
It wasn't just CGI: in the CERN and Apache lineage, every HTTP session was commonly handled by a forked copy of the entire server! Apache gradually had better answers, but its API and common add-ons made the transition difficult, so web servers like nginx took off, built closer to the architecture in the article with event-driven I/O from the beginning.
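For contrast with the fork-per-session model, here is a minimal sketch (my own placeholder names, error handling trimmed) of the event-driven shape that nginx-style servers and the article's architecture use: one process multiplexes every connection over an epoll loop instead of forking a server copy per session.

```c
/* epoll_sketch.c -- illustrative sketch of event-driven I/O:
 * a single process watches all sockets and reacts as they become
 * readable, rather than dedicating a forked process to each session. */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

/* Assumes `listen_fd` is already bound, listening, and non-blocking. */
void event_loop(int listen_fd) {
    int ep = epoll_create1(0);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        struct epoll_event events[MAX_EVENTS];
        int n = epoll_wait(ep, events, MAX_EVENTS, -1);

        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* New connection: register it with the same loop
                 * instead of fork()ing another server process. */
                int client = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN,
                                           .data.fd = client };
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
            } else {
                /* Ready client socket: read and respond without
                 * blocking the other connections. */
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof buf);
                if (r <= 0) { close(fd); continue; }
                /* ... parse request, write response ... */
            }
        }
    }
}
```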
jabl No.44983055
To nitpick: at least as of Apache HTTPD 1.3, ages ago, it wasn't forking for every request. It kept a pool of already-forked worker processes, each handling one connection at a time but an unlimited number of connections sequentially, and it could spawn or kill worker processes depending on load.

The same model is possible in Apache httpd 2.x with the "prefork" mpm.
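A minimal sketch of that prefork shape, assuming a POSIX environment; this is the general pattern, not Apache's actual code: fork a pool of workers up front, and each worker accepts and serves one connection at a time, sequentially, for as long as it lives.

```c
/* prefork_sketch.c -- illustrative sketch of a prefork worker pool.
 * Real servers also grow/shrink the pool under load and restart
 * workers that die. */
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define NUM_WORKERS 8

static void worker(int listen_fd) {
    for (;;) {
        /* Each worker blocks in accept() on the shared listening
         * socket; the kernel hands each new connection to one of them. */
        int client = accept(listen_fd, NULL, NULL);
        if (client < 0)
            continue;
        /* handle_connection(client);  -- one connection at a time */
        close(client);
    }
}

/* Assumes `listen_fd` is already bound and listening. */
void prefork_pool(int listen_fd) {
    for (int i = 0; i < NUM_WORKERS; i++) {
        if (fork() == 0) {      /* child: becomes a long-lived worker */
            worker(listen_fd);
            _exit(0);
        }
    }
    /* Parent stays around to supervise the pool. */
    while (wait(NULL) > 0)
        ;
}
```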

kev009 No.44989797
I don't see anything in my comment that implied _when_ the forking happened, so it's not really a nit :)