If you find yourself with a scripted language where processing HTTP requests might be too slow or unsafe, I can still see some utility for FastCGI. For most of the rest of us, HTTP won, just write little HTTP webservers.
Without serious development effort, I would expect an existing web server speaking FastCGI to be faster than a web server you write yourself. It is also more secure, since the FastCGI application can run as a different user in a chroot, or in a namespace-based sandbox like a Docker container. You would think the separation argument would still hold for compiled languages, even if the performance argument is no longer as relevant.
Was FastCGI a child of a world where we had neither good library support ("import net/http") nor (much) layering in front of the server (balancers / CDNs / Cloudflare etc.), so it made sense to assume a production-level layer on the box itself was always needed?
I remember the vigorous discussions comparing security of Apache vs IIS etc
Maybe today, with all the complexity of HTTP/2 and HTTP/3, it would make slightly more sense. But FastCGI was popular when all the world was just good old plain HTTP/1.
If you're currently using a reverse proxy, did you remember to make sure that your proxy always deletes X-Forwarded-For from the client, always adds its own, OR that the backend always ignores it? And you have to do this for each piece of metadata you expect to come from the proxy. With FastCGI this is not needed.
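For nginx, that hygiene looks roughly like this (a sketch only; `127.0.0.1:8080` is a placeholder backend, and which headers matter depends on what your backend actually trusts):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;
    # Overwrite rather than append, so a client-supplied
    # X-Forwarded-For value never reaches the backend verbatim.
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Real-IP $remote_addr;
}
```

With FastCGI the equivalent metadata arrives as FastCGI params set by the proxy itself, so there is no client-controlled header to scrub in the first place.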
I chose SCGI instead of FastCGI, though, since nginx doesn't support multiplexing and I don't use large request bodies. SCGI not supporting multiplexing makes it much simpler to write a back end. You just accept a new socket and fork for each request.
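To sketch why SCGI backends are so simple: the entire request framing is a single netstring of NUL-separated headers, beginning with CONTENT_LENGTH. A hypothetical minimal encoder/decoder (a real backend would also read the body according to CONTENT_LENGTH):

```python
# SCGI framing: "<len>:<payload>," where payload is
# NUL-separated key/value pairs, CONTENT_LENGTH first.

def encode_scgi(headers):
    body = b"".join(k + b"\x00" + v + b"\x00" for k, v in headers)
    return str(len(body)).encode() + b":" + body + b","

def decode_scgi(data):
    length, rest = data.split(b":", 1)
    n = int(length)
    assert rest[n:n + 1] == b","          # netstring terminator
    parts = rest[:n].split(b"\x00")[:-1]  # trailing NUL leaves an empty tail
    return dict(zip(parts[0::2], parts[1::2]))

req = encode_scgi([(b"CONTENT_LENGTH", b"0"),
                   (b"SCGI", b"1"),
                   (b"REQUEST_METHOD", b"GET"),
                   (b"REQUEST_URI", b"/")])
assert decode_scgi(req)[b"REQUEST_URI"] == b"/"
```

The accept-and-fork server around this is just a socket loop, which is why the whole backend fits in a few dozen lines.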
By the way, FastCGI wasn't designed as "binary HTTP" as implied by some sibling comments, but rather "CGI over a socket". It passes the environment variables the CGI program would have had, and multiplexes its stdin, stdout and even stderr. SCGI is the same but without multiplexing or stderr.
The author complains about having to use a reverse proxy at all, which is fine for prototyping, but I have about 5 domains pointed at the same server, and multiple apps on some domains, so why wouldn't I use a reverse proxy to route those requests? And yes, I run the same nginx reverse proxy on my development machine for testing.
But I suspect it's more that CGI was the way things had always been done. They didn't even consider doing a reverse proxy. They asked the question "how do we make CGI faster" and so ended up with FastCGI.
Other developers asked the same question and ended up making mod_php (and friends), embedding the scripting language directly into the web server.
Not sure I've ever seen Filter in real life.
[1]: https://deadlime.hu/en/2023/11/24/technologies-left-behind/
To give some examples what I mean:
- Apache can be a FastCGI server (mod_fcgid) and proxy (mod_proxy_fcgi)
- Nginx (and most other webservers I checked) is only a FastCGI proxy
Wikipedia [1] lists both as "Web servers that implement FastCGI". It took me some time to recognize the difference between a server that merely speaks the FastCGI protocol and one that can also host/spawn FastCGI applications which do not daemonize by themselves.
For the (probably) most used FastCGI application, PHP, this is easy because php-fpm is both a FastCGI server and application.
An example for a (Fast)CGI application which is not also a server is MapServer [2] which is listed by Wikipedia as CGI program and by its own documentation with "FastCGI support" or "FastCGI-enabled".
The fact that it is only a FastCGI application and needs an additional FastCGI server to host it is (in my opinion) not clearly communicated.
A common tutorial to combine MapServer with Nginx is to use fcgiwrap or spawn-fcgi as FastCGI server.
Since a FastCGI application can usually also act as a CGI application, it is easy to miss that the first option runs the application as plain CGI, and only the second as FastCGI, where one instance serves more than one request.
For me the main difference is that a FastCGI server will open a socket and can spawn worker processes (which php-fpm does and MapServer does not) and a FastCGI application (the worker process) will accept and handle connections/requests.
A FastCGI proxy translates requests (usually from HTTP to the FastCGI protocol) and forwards them to a FastCGI server. It does not spawn processes the way Apache's mod_fcgid does.
Due to this simplicity it should even be possible to use systemd as a simple FastCGI server. [3] (I have never tried it.)
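Following the idea in [3], a rough sketch of letting systemd own the listening socket (unit and binary names here are hypothetical; classic FastCGI applications expect the listening socket on fd 0, which `StandardInput=socket` provides, but systemd itself never speaks the FastCGI protocol):

```ini
# myapp.socket -- systemd holds the FastCGI listening socket
[Socket]
ListenStream=/run/myapp/fcgi.sock

[Install]
WantedBy=sockets.target

# myapp.service -- started on first connection, socket passed as stdin
[Service]
ExecStart=/usr/local/bin/myapp-fcgi
StandardInput=socket
```

The web server is then pointed at `/run/myapp/fcgi.sock` as it would be at any other FastCGI backend.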
My recommendation if you ever come across a FastCGI application which cannot open a socket / daemonize by itself: use Apache mod_fcgid, even if you already use another webserver to handle HTTP. Of all the options I checked, it has by far the most useful knobs for dynamically limiting the number of worker processes and respawning them from time to time.
[1] https://en.wikipedia.org/wiki/FastCGI [2] https://en.wikipedia.org/wiki/MapServer [3] https://nileshgr.com/2016/07/09/systemd-fastcgi-multiple-pro...
That is in line with what the article is saying. Thanks for clarifying.
The script would write new html files for new posts and do "fun" (I mean, terrifying) string manipulation on the main index to insert links to posts etc. Sometimes they used comments with metadata to help "parse" pages which would see edits.
These both were, and definitely were not, "the days" :D
It's not just about separation of concerns, but also separation of crashes/bugs/issues. FastCGI servers can run for years without restarts.
Thread creation/teardown/sleeping has gotten a lot faster in Linux as well.
https://php-fpm.org/about/ it may be old but PHP-FPM is still one of the best FastCGI servers from a pragmatic point of view... ex: the ability to gracefully hot reload code, stop and start workers without losing any queries... all in production.
  $ mkdir cgi-bin
  $ cat > cgi-bin/proxy.sh <<'EOF'
  #!/bin/bash
  echo -e "Content-type: text/html\n"
  curl -s -k http://www.zoobab.com -o -
  EOF
  $ chmod +x cgi-bin/proxy.sh
  $ python -m http.server --cgi 8000
  $ curl http://localhost:8000/cgi-bin/proxy.sh

You should get the HTML page of http://www.zoobab.com
In some sense, implementing a full webserver just because the world standardized on a horribly inefficient way of handling CGI is pretty silly.
We don’t rewrite “cp” and such just because we want to copy multiple files quickly…
Of course, if the sole purpose is just to handle CGI types of things, then the custom embedded webserver likely makes more sense. Apache is a horribly complex beast.
> cgi — Common Gateway Interface support
> Deprecated since version 3.11, removed in version 3.13.
Wherever we needed faster interaction, FastCGI did the job and allowed us to interface with anything in the backend including our C programs.
Of these, if you get to pick one and the request isn't for a static file, SCGI is the obvious best choice.
You can also load extra plugin modules into nginx itself, of course, including one that puts a Lua interpreter inside nginx (mod_php-style).
CGI makes sense: it fills a specific niche, one request, one process. But FastCGI? Why? It's a server speaking a different, incompatible HTTP. Did everyone jump on it just because it had CGI in the name, and the association CGI == web process was too firmly entrenched in people's minds?
But I am not really in a web ops role, so who knows, perhaps fastcgi does have some sort of advantage when run at scale.
My second biggest customer went out of business. Which left us with a bunch of itty-bitty businesses that I'm just too tired to be chasing after.
And, yeah, so much ... cruft now, and I rarely hear reasons for the tradeoff, other than "should" and "It will be great."
And if it is so great, why did it go away?
Similarly, I either hear no discussion about tradeoffs, or they tend to be so vague or weak... and it eventually boils down to "Ain't nobody got time to figure out the 'right way', and this new $thing is popular, so this new way is the right way, everyone...".
As far as why it went away, I frankly don't know... but I imagine for the same reasons other good stuff goes away: maybe contributors stopped working on it, or there is money to be made in the new ways, or the zeitgeist of the new way (due to management consultants whispering in the ear of IT senior leaders) takes over the attention of the everyday devs, etc. Then again, I suppose if one were to preserve certain local infrastructure in a way that allowed using the old stuff, one may still be able to run things the old way... but that's an uphill battle, and likely not worth it.
The parser ended up being 30 lines of code.
Signed, almost dead battery