466 points | blacktechnology
danpalmer (No.41834089)
Reading the deployment information, there's an interesting tension here with applications that target self-hosting.

Deploying this requires running 5 different open source servers (databases, proxies, etc.) and 5 different services that form part of this suite. If I were self-hosting this in a company, I would now need to be an expert in lots of different systems, and potentially in how to scale them, back them up, and so on. The trade-offs here are very different from those made when architecting a typical SaaS backend, where this sort of architecture might be fine.

I've been going through this myself with a hobby project. I'm designing it for self-hosting, and it's a radically different way of working from what I'm used to (operating services just for my company). I've been using SQLite and local disk storage so that there are essentially just two components to operate and scale: application replicas and shared disk storage (which is easy to back up too). I'd rather be using Postgres, and I'd rather be using numerous other services, background queue processors, etc., but each of those components is something my users would need to understand, and therefore something to be minimised far more strictly than if it were just me or one team running it.
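
To make that concrete, here's roughly what such a deployment looks like as a compose file (a minimal sketch; the image name and paths are made up for illustration):

  services:
    app:
      image: example/hobby-app:latest   # the single application image
      ports:
        - "8080:8080"
      volumes:
        - data:/var/lib/app             # SQLite database file + uploads
  volumes:
    data: {}                            # the one thing users have to back up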

Huly looks like a great product, but I'm not sure I'd want to self-host.

letters90 (No.41834792)
I don't really see where you're getting that:

https://github.com/hcengineering/huly-selfhost

matly (No.41835017)
That's actually supporting the poster's argument.

Take a look at all the configs and moving parts checked into this very repo that are needed to run a self-hosted instance. Yes, it is somewhat nicely abstracted away, but that doesn't change the fact that in the kube directory alone [1] there are 10 subfolders with even more config files.

1: https://github.com/hcengineering/huly-selfhost/tree/main/kub...
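
For a sense of scale, each of those subfolders holds per-service manifests roughly of this shape (an illustrative sketch, not copied from the repo):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: account                       # hypothetical service name
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: account
    template:
      metadata:
        labels:
          app: account
      spec:
        containers:
          - name: account
            image: example/account:latest   # placeholder image
            ports:
              - containerPort: 3000

Multiply that by ten subfolders, plus the accompanying Services, ConfigMaps and Ingresses.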

KronisLV (No.41836153)
> Yes, it is somewhat nicely abstracted away, but that doesn't change the fact that in the kube directory alone [1] there are 10 subfolders with even more config files.

That's just what you get with Kubernetes, most of the time. Although powerful and widely used, it can be quite... verbose. For a simpler picture, you can look at https://github.com/hcengineering/huly-selfhost/blob/main/tem...

There, you have:

  mongodb       supporting service
  minio         supporting service
  elastic       supporting service
  account       their service
  workspace     their service
  front         their service
  collaborator  their service
  transactor    their service
  rekoni        their service
I would still opt for something simpler than that, since developing all of the above services would keep multiple teams busy, but the Compose format is nice when you want to understand at a glance what you're looking at.
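
Abridged, that compose file boils down to something like this (a sketch; the vendor image names and wiring are placeholders, not the actual template):

  services:
    mongodb:
      image: mongo:7                    # supporting services
    minio:
      image: minio/minio
    elastic:
      image: elasticsearch:8.11.1
    account:
      image: example/account            # placeholder for the vendor image
      depends_on: [mongodb, minio]
    transactor:
      image: example/transactor
      depends_on: [mongodb, elastic]
    # front, workspace, collaborator and rekoni follow the same pattern

Nine services is still a lot, but at least it reads top to bottom.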
matly (No.41837566)
As someone who develops native Kubernetes platforms: providing the raw resources/manifests is almost the worst way of shipping a user install. It works only as long as you never have a breaking change in your manifests or any kind of more complex upgrade.
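
If you do ship raw manifests, the least you can do is keep them usable as a kustomize base, so operators can pin a version and patch locally instead of forking (a sketch; the remote ref and resource names are hypothetical):

  # kustomization.yaml, applied with `kubectl apply -k .`
  resources:
    - github.com/example/huly-selfhost/kube?ref=v0.6.0   # hypothetical pinned base
  patches:
    - target:
        kind: Deployment
        name: front                                      # hypothetical name
      patch: |-
        - op: replace
          path: /spec/replicas
          value: 2

A Helm chart or an operator that owns the upgrade path would serve self-hosters better still.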

Which brings me back to the initial question: is this complexity, with all its external dependencies, really needed? For a decently decomposed, highly scalable microservice architecture, maybe. For an open source, (likely) single-tenant management platform? Unlikely.

It highlights the problem of clashing requirements between different target user groups.