One shared JVM for maximum performance!
It can also share db connection pools, caches, etc. among those applications!
Wow!
Because a lot of production software is half-baked. If you have to hand an application over to an operations team, you need documentation, instrumentation, useful logging, error handling, and a ton of other things. Instead, software is now stuffed into containers that never receive security updates, because containers apparently make things secure. Then the developers can just dump whatever works into a container and hide the details.
To be fair, most of that software is also far more complex today: there are a ton of dependencies and integrations, and keeping track of them is a lot of work.
I worked with an old-school C programmer who complained that a system we deployed was a ~2GB WAR file, running on Tomcat, requiring at least 8GB of memory, and still crashing constantly. He had on multiple occasions offered to rewrite the whole thing in C, which he figured would be <1MB and need at most 50MB of RAM to run. Sadly the customer never agreed; I would have loved to see whether it worked out as he predicted.
But imagine having to host 50 small applications, each serving a couple of hundred requests per day. In that case, the memory overhead of Tomcat with 50 WAR files is much bigger than that of a simple Apache/Nginx server with CGI scripts.
Not saying that can't happen with CGI, but since Tomcat is a shared environment, it's much more susceptible to it.
This is why shared, public Tomcat hosting never became popular compared to shared CGI hosting. A rogue CGI program can be managed by the host accounting subsystem (say, it runs too long, takes up too much memory, etc.), plus all of the other guards that can be put on processes.
The efficiency of CGI, specifically for compiled executables, comes from the code segments being shared in virtual memory, so forking a new process can be quite cheap. Forking a new Perl or PHP process shares that too, but those still have to go through the parsing phase on every request.
The middle ground of "p-code" can work well, as those files are also shared in the buffer cache. The underlying runtime can map the p-code files into the process, and those are shared across instances also.
So the fork startup time, while certainly not zero, can be quite small.
I develop and deploy Flask + Rails + Django apps regularly, and the deploy process is the same few Docker Compose commands. All of the images are stored the same way, with only tiny differences in the Dockerfiles themselves.
It has been a tried and proven model for ~10 years. The core fundamentals have held up; there are new features, but when I look at Dockerfiles I wrote in 2015 vs. today, you can still see a lot of common ideas.
Not to mention the endless frustration any upgrade would cause, since we had to get all teams on board with "Hey, we are upgrading PHP 5, are you ready?" and there was always that abandoned app that couldn't be shut down because $BusinessReasons.
Containers have greatly helped with those frustration points, and languages that self-host HTTP have made things vastly better for us Ops folks.
We'd use this approach not just for webapps, but versions of applications we'd build in house, bundles of scripts, whatever.