A big project not handling HTTPS itself (like docmost) adds tons of complexity on the server side. Now I have to install that service as a container to isolate it, then add a reverse proxy on top, etc.
That leads to resource inflation when I just want to use a small VM for that single task. Instead, I now deploy a whole infrastructure to run that one small thing.
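To be concrete about the extra layer I mean: terminating TLS in front of such an app is roughly this much code with Go's standard library (a sketch only; the backend port and cert/key paths are placeholders I made up):

    // Minimal TLS-terminating reverse proxy in front of an app that only
    // speaks plain HTTP on a local port.
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // The backend app (e.g. the container) listens on plain HTTP locally.
        backend, err := url.Parse("http://127.0.0.1:3000")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // Terminate TLS here; cert/key paths are whatever your CA hands out.
        log.Fatal(http.ListenAndServeTLS(":443",
            "/etc/ssl/certs/app.pem", "/etc/ssl/private/app.key", proxy))
    }

It's not that this is hard to write or configure, it's that every app without built-in HTTPS forces you to run and maintain one more moving part.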
99% of them use the system-wide PKI store under /etc for the CA and their certificates. All of them have configuration options for these paths, and the certificates have short lifetimes because of the operational regulations we have in place.
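The pattern is roughly the sketch below (a rough illustration, not any particular app; env var names and ports are invented): cert/key locations come from configuration, and they are re-read on each handshake, so short-lived certs rotated by an external tool are picked up without a restart.

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Paths come from configuration, e.g. pointing into the system PKI store.
        certFile := os.Getenv("APP_TLS_CERT") // e.g. /etc/pki/tls/certs/app.pem
        keyFile := os.Getenv("APP_TLS_KEY")   // e.g. /etc/pki/tls/private/app.key

        srv := &http.Server{
            Addr: ":8443",
            TLSConfig: &tls.Config{
                // Reload from disk per handshake; fine for low-traffic internal apps.
                GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
                    cert, err := tls.LoadX509KeyPair(certFile, keyFile)
                    if err != nil {
                        return nil, err
                    }
                    return &cert, nil
                },
            },
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("ok\n"))
            }),
        }
        // Empty cert/key args: the server falls back to TLSConfig.GetCertificate.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }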
In the worst case, we distribute them with saltstack; otherwise we just use scp, maybe with a small wrapper script. Managing them doesn't take any measurable time for us.
...and we have our own CA on top of it.
So even for two apps from the same vendor, which are commonly deployed together, you need bespoke TLS file variants. Scale that to more applications, and you’ll find out the hard way that you are vastly underestimating the complexity of operating a software ecosystem.
Consider yourself lucky then.
For self-hosted third-party software, I've seen requirements to provide it in an environment variable, upload it through a web form over plain HTTP on localhost, specify an AWS Secrets Manager secret that contains it, put it in a specific location that is bind-mounted into a container, create a custom image (both VM and container), etc.
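Even the "simple" variants differ enough to need per-app glue. A hedged sketch of just the first and fourth styles (env var names and the mount path are made up for the example):

    package main

    import (
        "crypto/tls"
        "log"
        "os"
    )

    func loadCert() (tls.Certificate, error) {
        // Style 1: PEM blobs handed over in environment variables.
        if pem, key := os.Getenv("TLS_CERT_PEM"), os.Getenv("TLS_KEY_PEM"); pem != "" && key != "" {
            return tls.X509KeyPair([]byte(pem), []byte(key))
        }
        // Style 2: files at a fixed location bind-mounted into the container.
        return tls.LoadX509KeyPair("/certs/tls.crt", "/certs/tls.key")
    }

    func main() {
        cert, err := loadCert()
        if err != nil {
            log.Fatal(err)
        }
        _ = cert // hand off to a tls.Config{Certificates: ...} in a real server
        log.Println("certificate loaded")
    }

And that's before you get to the ones that want a web form, a cloud secret store, or a rebuilt image.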