
466 points | blacktechnology | source
danpalmer ◴[] No.41834089[source]
Reading the deployment information, there's an interesting tension here with applications that target self-hosting.

Deploying this requires running 5 different open-source servers (databases, proxies, etc.) plus the 5 services that form the suite itself. If I were self-hosting this in a company, I'd now need to be an expert in lots of different systems, and potentially in how to scale them, back them up, and so on. The trade-offs here are very different from those you'd make when architecting a typical SaaS backend, where this sort of architecture might be fine.

I've been going through this myself with a hobby project. I'm designing it for self-hosting, and it's a radically different way of working from what I'm used to (operating services just for my company). I've been using SQLite and local disk storage so that there are essentially just two components to operate and scale – application replicas, and shared disk storage (which is easy to back up too). I'd rather be using Postgres, and I'd rather be using numerous other services – background queue processors and so on – but each of those components is something my users would need to understand, and therefore something to be minimised far more strictly than if it were just me/one team.
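To make that concrete, here's roughly the whole operational footprint – a minimal sketch in Go, assuming the mattn/go-sqlite3 driver and a made-up DATA_DIR setting (both illustrative choices, not anything Huly does). One process, one directory:

    package main

    import (
        "database/sql"
        "log"
        "os"
        "path/filepath"

        _ "github.com/mattn/go-sqlite3" // assumed driver; any SQLite driver works
    )

    func main() {
        // One directory holds everything the operator has to back up.
        dataDir := os.Getenv("DATA_DIR") // hypothetical config knob
        if dataDir == "" {
            dataDir = "./data"
        }
        if err := os.MkdirAll(filepath.Join(dataDir, "blobs"), 0o755); err != nil {
            log.Fatal(err)
        }

        db, err := sql.Open("sqlite3", filepath.Join(dataDir, "app.db"))
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // WAL lets readers proceed alongside the single writer;
        // busy_timeout makes writers queue instead of erroring.
        for _, pragma := range []string{
            "PRAGMA journal_mode=WAL",
            "PRAGMA busy_timeout=5000",
        } {
            if _, err := db.Exec(pragma); err != nil {
                log.Fatal(err)
            }
        }
        // ... serve HTTP, write uploads under dataDir/blobs ...
    }

Everything an operator has to run, monitor, and back up is that one binary and that one directory.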

Huly looks like a great product, but I'm not sure I'd want to self-host.

colordrops ◴[] No.41834204[source]
Super important point. I work for a very large, famous company and deployed an open-source project with a bit of customization, which became one of the most-used internal apps at the company. It was mainly front-end code. It gained a lot of traction on GitHub, and the developer decided to create 2.0, which ended up having dependencies on things like Supabase. I did all I could to try to deploy Supabase internally, but it was just too much of an impedance mismatch with our systems, so we ended up punting and going with another provider. Had they just gone with raw Postgres it would have been fine, as we already have a Postgres provider internally, but as a front-end engineer I wasn't willing to commit to being the maintainer for Supabase and its many moving parts.
replies(1): >>41834394 #
danpalmer ◴[] No.41834394[source]
Every external dependency a self-hosted service takes on is another chance for it to hit that impedance mismatch you mentioned.

Postgres is a relatively uncontroversial one, but I had the benefit of working for a company already operating a production Postgres cluster, where I could easily spin up a new database for a service. I went with SQLite/on-disk storage because, for most companies, providing a resilient block storage device with backups is likely trivial, whether they're on a cloud or on bare metal.
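Continuing the sketch above, the backup story can stay just as small – assuming SQLite 3.27+, whose VACUUM INTO snapshots the live database without stopping writers:

    // backupDB writes a consistent snapshot of the live database to
    // destPath (which must not already exist) while writers keep going.
    // Requires SQLite 3.27+ for VACUUM INTO.
    func backupDB(db *sql.DB, destPath string) error {
        _, err := db.Exec("VACUUM INTO ?", destPath)
        return err
    }

The blob directory is just files, so whatever snapshot/rsync tooling the company already has covers the rest.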

replies(1): >>41834708 #
nine_k ◴[] No.41834708[source]
SQLite is fine and dandy as long as you don't do a lot of updates. SQLite takes a database-wide lock for each write transaction, so there is only ever one writer at a time. That may be fine for quite a long time, or you may hit slowdowns with just a few parallel users, depending on your use case.
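For what it's worth, that behaviour is easy to see in a toy program – same assumed Go driver as in the sketches above; whether a losing writer waits or errors depends on the busy_timeout setting:

    package main

    import (
        "database/sql"
        "log"
        "sync"

        _ "github.com/mattn/go-sqlite3" // assumed driver
    )

    func main() {
        db, err := sql.Open("sqlite3", "contention.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
        if _, err := db.Exec("CREATE TABLE IF NOT EXISTS t (n INTEGER)"); err != nil {
            log.Fatal(err)
        }

        var wg sync.WaitGroup
        for i := 0; i < 8; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                // Each INSERT takes the database-wide write lock, so these
                // goroutines serialize; without a busy_timeout, a contending
                // writer instead fails fast with SQLITE_BUSY
                // ("database is locked").
                if _, err := db.Exec("INSERT INTO t (n) VALUES (1)"); err != nil {
                    log.Println("writer lost the race:", err)
                }
            }()
        }
        wg.Wait()
    }

WAL mode narrows this to writer-vs-writer contention (readers no longer block), but SQLite still allows only one writer at a time, which is why it suits read-heavy self-hosted apps better than write-heavy ones.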