    154 points feep | 18 comments
    1. Traubenfuchs ◴[] No.44464611[source]
    Try Apache Tomcat 11 next. You can just dump .jsp files, or whole Java servlet applications as .war files, via ssh and it will just work!

    One shared JVM for maximum performance!

    It can also share db connection pools, caches, etc. among those applications!

    Wow!
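    A deploy in that world is literally one file copy. A minimal sketch, assuming a hypothetical host "appserver" with Tomcat installed under /opt/tomcat (Tomcat's autoDeploy, on by default, watches the webapps/ directory):

```shell
#!/bin/sh
# Sketch of a Tomcat hot deploy; "appserver" and /opt/tomcat are hypothetical.
WAR=myapp.war

# The whole deploy is one copy -- Tomcat's autoDeploy (on by default)
# notices anything dropped into webapps/ and loads it:
#   scp "$WAR" deploy@appserver:/opt/tomcat/webapps/

# Tomcat derives the context path from the war's filename:
CTX="/${WAR%.war}"
echo "app will be served at http://appserver:8080$CTX"
```

    The scp line is left as a comment so the sketch runs anywhere; undeploying is just as simple (delete the .war).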

    replies(4): >>44464663 #>>44464883 #>>44470934 #>>44477758 #
    2. atemerev ◴[] No.44464663[source]
    I miss this so much. Deployment should be just copying a file (over ssh or whatever). Why did people overcomplicate it so much?
    replies(5): >>44464765 #>>44464831 #>>44465743 #>>44466836 #>>44467142 #
    3. rexreed ◴[] No.44464765[source]
    PHP can work the same way. Push / FTP / SFTP PHP file to directory, deployed.
    replies(1): >>44465231 #
    4. mrweasel ◴[] No.44464831[source]
    > Why did people overcomplicate it so much?

    Because a lot of production software is half-baked. If you have to hand an application over to an operations team, you need documentation, instrumentation, useful logging, error handling and a ton of other things. Instead, software is now stuffed into containers that never receive security updates, because containers make things secure, apparently. The developers can then just dump whatever works into a container and hide the details.

    To be fair, most of that software is also far more complex today. There are a ton of dependencies and integrations, and keeping track of them is a lot of work.

    I once worked with an old-school C programmer who complained that a system we deployed was a ~2GB war file, running on Tomcat, requiring at least 8GB of memory and still crashing constantly. He had on multiple occasions offered to rewrite the whole thing in C, which he figured would be <1MB and require at most 50MB of RAM. Sadly the customer never agreed; I would have loved to see if it had worked out as he predicted.

    5. miroljub ◴[] No.44464883[source]
    It depends on the application usage pattern. For heavily used applications, sure, it's an excellent choice.

    But imagine having to host 50 small applications, each serving a couple hundred requests per day. In that case, the memory overhead of Tomcat with 50 war files is much bigger than that of a simple Apache/Nginx server with CGI scripts.

    replies(1): >>44465386 #
    6. Twirrim ◴[] No.44465231{3}[source]
    We used to use symlinks to enable atomic deployments, too. E.g. under /var/www/ we'd have /var/www/webapp_1.0, with a symlink /var/www/webapp pointing to it. When there was a new version, we'd upload it to /var/www/webapp_1.1, and to bring it live, just update the symlink. Need to roll back? Switch the symlink back.
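    The flip itself can be made truly atomic. A minimal sketch, with illustrative directory names standing in for /var/www (`mv -T` is GNU coreutils; renaming over the old symlink avoids the brief gap that `ln -sfn`'s unlink-then-create leaves):

```shell
#!/bin/sh
# Atomic release switching via a symlink; paths stand in for /var/www.
root=$(mktemp -d) && cd "$root"
mkdir webapp_1.0 webapp_1.1

ln -s webapp_1.0 webapp                                  # initial release
# New version uploaded to webapp_1.1; swap the link to go live.
# mv -T replaces the old symlink in a single rename(2) call:
ln -s webapp_1.1 webapp.tmp && mv -T webapp.tmp webapp
# Rolling back is the same operation in reverse:
ln -s webapp_1.0 webapp.tmp && mv -T webapp.tmp webapp
readlink webapp                                          # -> webapp_1.0
```

    The web server config points at /var/www/webapp, so requests always resolve through whichever release the link currently names.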
    replies(1): >>44466037 #
    7. whartung ◴[] No.44465386[source]
    The other issue with Tomcat is that a single bad actor can more easily compromise the server.

    Not saying that can't happen with CGI, but since Tomcat is a shared environment, it's much more susceptible to it.

    This is why shared, public Tomcat hosting never became popular compared to shared CGI hosting. A rogue CGI program can be managed by the host accounting subsystem (say, it runs too long, takes up too much memory, etc.), plus all of the other guards that can be put on processes.

    The efficiency of CGI, specifically for compiled executables, comes from the code segments being shared in virtual memory, so forking a new instance can be quite cheap. Forking a new Perl or PHP process shares that too, but it still has to go through the parsing phase on every request.

    The middle ground of "p-code" can work well, as those files are also shared in the buffer cache. The underlying runtime can map the p-code files into the process, and those are shared across instances also.

    So, the fork startup time, while certainly not zero, can be quite efficient.

    replies(2): >>44467172 #>>44477687 #
    8. nickjj ◴[] No.44465743[source]
    Docker helps with this nowadays. Of course you need to understand the setup the first time you do it, but once you do, it applies to any tech stack.

    I develop and deploy Flask + Rails + Django apps regularly and the deploy process is the same few Docker Compose commands. All of the images are stored the same with only tiny differences in the Dockerfile itself.

    It has been a tried and proven model for ~10 years. The core fundamentals have held up; there are new features, but when I look at Dockerfiles I wrote in 2015 vs. today, you can still see a lot of common ideas.
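    Those "same few commands" might look like the following; the service name "web" is a hypothetical stand-in, not something from the comment:

```shell
#!/bin/sh
# Sketch of a compose-based deploy; assembled as a string rather than
# executed, since running it needs a Docker host. "web" is hypothetical.
SERVICE=web
DEPLOY="docker compose pull && docker compose up -d && docker compose logs --tail=50 $SERVICE"
echo "$DEPLOY"
```

    The same sequence works unchanged for a Flask, Rails, or Django project, since the stack-specific details all live in the Dockerfile and compose file.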

    replies(1): >>44466372 #
    9. trinix912 ◴[] No.44466037{4}[source]
    Wouldn't that cause problems if someone found the old version and corrupted the data with it? Or would only the current version be accessible from the outside?
    replies(2): >>44466795 #>>44468182 #
    10. atemerev ◴[] No.44466372{3}[source]
    Docker makes things opaque. You deploy black boxes and have no idea how the components inside operate. That's fine for devops, but as a software engineer I prefer to work without Docker (and having to use Docker to install something on a local machine is an abomination, of course).
    11. indigodaddy ◴[] No.44466795{5}[source]
    How would an external user find the old version?
    12. stackskipton ◴[] No.44466836[source]
    Ops here. I mean, you still can if you use something like Golang or self-contained Java/.Net. However, the days of "just transfer over PHP files" ignore the massive setup Ops had to do to get the web server into a state where those files could just be transferred over, and the care and feeding required to keep it in that state.

    Not to mention the endless frustration any upgrade would cause, since we had to get all teams on board with "Hey, we're upgrading PHP 5, are you ready?", and there was always that abandoned app that couldn't be shut down because $BusinessReasons.

    Containers have greatly helped with those frustration points, and languages that self-host HTTP have made things vastly better for us Ops folks.

    13. mrkeen ◴[] No.44467142[source]
    Perhaps. Over SSH? With a password or with a key? Do all employees share the same private key, or do keys need to be added and removed as employees come and go? Is there one server or three (are all deployment steps done manually in triplicate)? When Tomcat itself is upgraded, do you just eat the downtime? What about system package upgrades, or the OS? And which file gets copied over: whatever a particular dev feels is the latest?
    14. grandiego ◴[] No.44467172{3}[source]
    I believe even today there's no way to control/isolate memory leaks on a per-war basis.
    15. Twirrim ◴[] No.44468182{5}[source]
    Your Apache (or whatever) config would point at the symlink location, so no one would be able to get at the old versions of the site.

    We'd use this approach not just for webapps, but versions of applications we'd build in house, bundles of scripts, whatever.

    16. immibis ◴[] No.44470934[source]
    Tomcat/Jakarta EE/JSP is a surprisingly solid stack. I only tried it once, but everything mostly just worked, and worked pretty well. You get to write pages PHP-style (interspersed HTML and code) but with the full power of Java instead of a hack language like PHP. That paradigm may not suit everyone, but you don't have to handle requests that way either: you can also install pure Java routes. It supports websockets.

    You can share data between requests, since it's a single-process, multi-threaded model, so you can write something with real-time communication. You can also not do that; JSP code (and of course local variables) is scoped to a request.

    Deployment is very easy: drop the new webapp (a single file) into the webapps directory by any method you like, e.g. scp, and when Tomcat notices the new file, it transparently loads the new app and unloads the old one. You do have to watch out for classloader leaks that would prevent the old app from being garbage-collected, though; that's a downside of the single-process model.
    17. lenkite ◴[] No.44477687{3}[source]
    Well, James Gosling was working on the Java Isolates spec, but then Sun ran into financial difficulties and most of the forward-looking JSR (Java Specification Request) work got frozen. Oracle had different priorities after the acquisition; moving away from big, fat enterprise app servers was a big no-no.
    18. palmfacehn ◴[] No.44477758[source]
    I'm still happily using Jetty for webapp backends.