466 points blacktechnology | 5 comments
danpalmer ◴[] No.41834089[source]
Reading the deployment information, there's an interesting tension here with applications that target self-hosting.

Deploying this requires running 5 different open source servers (databases, proxies, etc.) plus the 5 services that make up the suite itself. If I were self-hosting this in a company I'd now need to be an expert in lots of different systems, and potentially in how to scale them, back them up, and so on. The trade-offs here are very different from those of a typical SaaS backend, where this sort of architecture might be fine.

I've been going through this myself with a hobby project. I'm designing it for self-hosting, and it's a radically different way of working from what I'm used to (operating services just for my company). I've been using SQLite and local disk storage so that there are essentially just 2 components to operate and scale – application replicas, and shared disk storage (which is easy to back up too). I'd rather be using Postgres, and numerous other services, background queue processors, etc., but each of those components is something my users would need to understand, and therefore something to be minimised far more strictly than if it were just me/one team.
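
Roughly what that footprint can look like, as a sketch (Go, the driver, and all names/paths here are illustrative, not the actual project code):

    package main

    import (
        "database/sql"
        "log"
        "net/http"
        "os"
        "path/filepath"

        _ "github.com/mattn/go-sqlite3" // assumed driver; any SQLite driver would do
    )

    func main() {
        // The entire operational surface: one process plus one directory.
        dataDir := os.Getenv("DATA_DIR")
        if dataDir == "" {
            dataDir = "./data"
        }
        if err := os.MkdirAll(filepath.Join(dataDir, "files"), 0o755); err != nil {
            log.Fatal(err)
        }

        // The SQLite file lives in the same directory as uploaded files,
        // so "back up the app" just means "copy DATA_DIR".
        db, err := sql.Open("sqlite3", filepath.Join(dataDir, "app.db"))
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)`); err != nil {
            log.Fatal(err)
        }

        http.Handle("/files/", http.StripPrefix("/files/",
            http.FileServer(http.Dir(filepath.Join(dataDir, "files")))))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }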

Huly looks like a great product, but I'm not sure I'd want to self-host.

replies(28): >>41834100 #>>41834175 #>>41834204 #>>41834282 #>>41834308 #>>41834334 #>>41834356 #>>41834450 #>>41834538 #>>41834603 #>>41834672 #>>41834792 #>>41834861 #>>41834865 #>>41834973 #>>41835133 #>>41835222 #>>41835339 #>>41835929 #>>41835949 #>>41836134 #>>41836856 #>>41836958 #>>41838118 #>>41839489 #>>41840080 #>>41876861 #>>41905212 #
wim ◴[] No.41834308[source]
(Also building a product in the productivity space, with an option for users to self-host the backend)

That's interesting, for us there was actually no trade-off in that sense. Having operated another SaaS with a lot of moving parts (separate DB, queueing, etc), we came to the conclusion rather early on that it would save us a lot of time, $ and hassle if we could just run a single binary on our servers instead. That also happens to be the experience (installation/deployment/maintenance) we would want our users to have if they choose to download our backend and self-host.

Just download the binary and run it. Another benefit is that it's also super helpful for local development: we can run the actual production server on our own laptops as well.

We're simply using a Go backend with SQLite and local disk storage, and it pretty much contains everything we need to scale, from websockets to queues. The only #ifdef cloud_or_self_hosted will probably be that we'll use some S3-like store next to a local cache.
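
Concretely, that one seam can be a single interface with the implementation picked per build; a minimal sketch, with illustrative names rather than the actual code:

    package blob

    import (
        "io"
        "os"
        "path/filepath"
    )

    // Store is the only abstraction the rest of the backend sees; the
    // self-hosted build gets the disk version, while a cloud build could
    // swap in an S3-backed version (plus a local cache) behind a
    // //go:build tag, with nothing else changing.
    type Store interface {
        Put(key string, r io.Reader) error
        Get(key string) (io.ReadCloser, error)
    }

    // DiskStore is the self-hosted implementation: plain files under one
    // root directory, which keeps backups a simple directory copy.
    type DiskStore struct{ Root string }

    func (s DiskStore) Put(key string, r io.Reader) error {
        path := filepath.Join(s.Root, filepath.Clean("/"+key)) // crude key sanitisation
        if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
            return err
        }
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(f, r)
        return err
    }

    func (s DiskStore) Get(key string) (io.ReadCloser, error) {
        return os.Open(filepath.Join(s.Root, filepath.Clean("/"+key)))
    }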

replies(4): >>41834355 #>>41834424 #>>41835250 #>>41835405 #
1. stephenr ◴[] No.41835405[source]
> We're simply using a Go backend with SQLite and local disk storage and it pretty much contains everything we need to scale

How do you make it highly available like this? Are you bundling an SQLite replication library with it as well?

replies(1): >>41835576 #
2. wim ◴[] No.41835576[source]
This is probably where the real trade-off is, and for that it's helpful to look at what the actual failure modes and requirements are. "Highly available" is a range and a moving target, of course, with every extra 9 getting a lot more complicated. Simply swapping SQLite for another database won't guarantee all the nines of uptime that specific industries require, just as running everything on a single server might already provide all the uptime you need.

In our experience a simple architecture like this almost never goes down and is good enough for the vast majority of apps, even when serving a lot of users. Certainly for almost all apps in this space. Servers are super reliable, upgrades are trivial, and for truly catastrophic failures recovery is extremely easy: just copy the latest snapshot of your app directory over to a new server and you're done. For scaling, we can simply shard over N servers, but that will realistically never be needed for the self-hosted version with the number of users within one organization.
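
For the SQLite part of that snapshot, getting a consistent copy is essentially one statement; a sketch along these lines (the driver and paths are assumptions, not the actual setup):

    package main

    import (
        "database/sql"
        "log"
        "os"
        "time"

        _ "github.com/mattn/go-sqlite3" // assumed driver
    )

    func main() {
        db, err := sql.Open("sqlite3", "data/app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if err := os.MkdirAll("data/snapshots", 0o755); err != nil {
            log.Fatal(err)
        }

        // VACUUM INTO (SQLite 3.27+) writes a transactionally consistent copy
        // of the live database into a new file, which can then be copied
        // off-box along with the rest of the app directory.
        dest := "data/snapshots/app-" + time.Now().Format("20060102-150405") + ".db"
        if _, err := db.Exec("VACUUM INTO '" + dest + "'"); err != nil {
            log.Fatal(err)
        }
        log.Println("snapshot written:", dest)
    }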

In addition, our app is offline-first, so all clients can work fully offline and sync back any changes later. Offline-first moves some of the complexity from the ops layer to the app layer, of course, but it means that in practice any rare interruption will probably go unnoticed.
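
The client side of that is essentially a durable outbox that retries; a rough sketch (the types, endpoint and wire format are illustrative only, not the actual sync protocol):

    package main

    import (
        "bytes"
        "net/http"
        "sync"
        "time"
    )

    // Change is one locally-made edit waiting to be pushed to the server.
    type Change struct {
        ID      int64
        Payload []byte // e.g. a JSON-encoded operation
    }

    // Outbox holds changes made while offline; a real client would persist
    // this queue (e.g. in its local SQLite), not keep it in memory.
    type Outbox struct {
        mu      sync.Mutex
        pending []Change
    }

    func (o *Outbox) Add(c Change) {
        o.mu.Lock()
        defer o.mu.Unlock()
        o.pending = append(o.pending, c)
    }

    // Flush pushes queued changes; whatever fails stays queued for the next
    // tick, which is why a brief server interruption mostly goes unnoticed.
    func (o *Outbox) Flush(endpoint string) {
        o.mu.Lock()
        defer o.mu.Unlock()
        var remaining []Change
        for _, c := range o.pending {
            resp, err := http.Post(endpoint, "application/json", bytes.NewReader(c.Payload))
            if err != nil || resp.StatusCode != http.StatusOK {
                remaining = append(remaining, c) // keep for retry
                if resp != nil {
                    resp.Body.Close()
                }
                continue
            }
            resp.Body.Close()
        }
        o.pending = remaining
    }

    func main() {
        box := &Outbox{}
        box.Add(Change{ID: 1, Payload: []byte(`{"op":"rename","doc":"readme"}`)})
        for range time.Tick(5 * time.Second) {
            box.Flush("http://localhost:8080/sync") // assumed endpoint
        }
    }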

replies(2): >>41835661 #>>41836470 #
3. stephenr ◴[] No.41835661[source]
> Servers are super reliable

We have clearly worked in very different places.

4. hypeatei ◴[] No.41836470[source]
> just copy the latest snapshot of your app directory over to a new server and done.

In practice, wouldn't you need a load balancer in front of your site for that to be somewhat feasible? I can't imagine you're doing manual DNS updates in that scenario because of propagation time.

replies(1): >>41837932 #
5. stephenr ◴[] No.41837932{3}[source]
I don't know for sure, but the explanation given sounds very much like they expect it to be a 100% manual process.