
154 points by feep | 3 comments
simonw No.44464893
I got my start in the CGI era, and it baked into me an extremely strong bias against running short-lived subprocesses for things.

We invented PHP and FastCGI mainly to get away from the performance hit of starting a new process just to handle a web request!

It was only a few years ago that I realized that modern hardware means it really isn't prohibitively expensive to do that any more - this benchmark gets to 2,000 requests a second, and if you can even get to a few hundred requests a second it's easy enough to scale across multiple instances these days.

I have seen AWS Lambda described as the CGI model reborn and that's a pretty fair analogy.
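
For concreteness: a CGI handler is just a program the server spawns once per request, with the request metadata delivered in environment variables and the response written to stdout. A minimal sketch (Python purely for illustration):

    #!/usr/bin/env python3
    # Minimal CGI script: the web server fork/execs this once per
    # request, so the per-request cost is mostly process startup.
    import os
    import sys

    # CGI delivers request metadata in environment variables.
    client = os.environ.get("REMOTE_ADDR", "unknown")

    # Response = headers, blank line, body, all on stdout.
    sys.stdout.write("Content-Type: text/plain\r\n\r\n")
    sys.stdout.write(f"Hello from PID {os.getpid()}, client {client}\n")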

replies(3): >>44465143 #>>44465227 #>>44465926 #
citrin_ru No.44465926
CGI was never prohibitively expensive for low load, and for high load a persistent process (e.g. FastCGI) is still better. CGI may let you handle 2k rps, but a FastCGI app doing the same job should handle more. You do need to start an additional server process (and restart it on upgrades), but that's worth doing if performance matters.
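
To make the contrast concrete, here's roughly what the persistent-process version could look like in Python, assuming the third-party flup package (the address and names are illustrative, not a recommendation):

    #!/usr/bin/env python3
    # Persistent FastCGI worker: starts once and serves many requests,
    # so there's no per-request fork/exec. Assumes `pip install flup`.
    from flup.server.fcgi import WSGIServer

    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello from a long-lived process\n"]

    if __name__ == "__main__":
        # The web server (e.g. nginx fastcgi_pass) connects here.
        WSGIServer(app, bindAddress=("127.0.0.1", 9000)).run()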
replies(1): >>44466710 #
1. cenamus No.44466710
I agree, but if you're doing FastCGI, you might as well speak HTTP directly, with a relay in front of it (load balancing, TLS termination, whatever).
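
In that setup the app speaks plain HTTP and the proxy handles the rest. A minimal stdlib sketch of the HTTP-behind-a-relay side (host and port are made up):

    #!/usr/bin/env python3
    # Plain HTTP app server; a reverse proxy in front handles
    # load balancing and TLS termination.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello over plain HTTP\n"]

    # e.g. nginx: proxy_pass http://127.0.0.1:8000;
    make_server("127.0.0.1", 8000, app).serve_forever()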
replies(1): >>44470900 #
2. immibis No.44470900
CGI-based protocols transfer a bunch of metadata from the front end - such as the client IP address - without any injection or double-parsing vulnerabilities. Using HTTP twice means having more code and a greater security risk.

By the way, if you're using nginx, then instead of FastCGI you might prefer SCGI, which does one connection per request with no multiplexing, so it's much simpler.
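
To show how simple, here's a toy single-threaded SCGI server sketch (port and error handling are illustrative, framing per the SCGI spec). The metadata arrives as pre-parsed NUL-separated key/value pairs, including REMOTE_ADDR, so nothing gets re-parsed as HTTP:

    #!/usr/bin/env python3
    # Toy SCGI server: one connection per request, no multiplexing.
    import socket

    def read_netstring(conn):
        # SCGI frames the headers as a netstring: "<len>:<payload>,"
        length = b""
        while (ch := conn.recv(1)) != b":":
            length += ch
        payload = b""
        while len(payload) < int(length):
            payload += conn.recv(int(length) - len(payload))
        conn.recv(1)  # consume the trailing ","
        return payload

    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 4000))  # e.g. nginx: scgi_pass 127.0.0.1:4000;
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        # The front end already parsed the request; headers arrive
        # as NUL-separated key/value pairs, so no double-parsing here.
        fields = read_netstring(conn).split(b"\x00")
        headers = dict(zip(fields[0::2], fields[1::2]))
        client = headers.get(b"REMOTE_ADDR", b"?").decode()
        conn.sendall(b"Status: 200 OK\r\nContent-Type: text/plain\r\n\r\n"
                     b"Hello, " + client.encode() + b"\n")
        conn.close()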

replies(1): >>44472219 #
3. petee No.44472219
I always wished that FastCGI's Filter & Authorizer roles had become popular; it's a nice separation of duties.
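
For reference, per the FastCGI spec the Authorizer runs before the Responder: the server sends it the request metadata (no body), a 200 reply means "allow", and response headers named Variable-NAME are exported to the real application. A hypothetical sketch of just the decision logic (the token and variable name are made up):

    # Sketch of the FastCGI Authorizer contract: return 200 to allow;
    # any other status is sent to the client instead of the app's output.
    def authorize(environ):
        token = environ.get("HTTP_AUTHORIZATION", "")
        if token == "Bearer letmein":  # placeholder credential check
            # Variable-* headers pass values on to the Responder.
            return ("200 OK", {"Variable-REMOTE_USER": "alice"}, b"")
        return ("403 Forbidden",
                {"Content-Type": "text/plain"}, b"denied\n")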