
466 points by blacktechnology | 7 comments
danpalmer No.41834089
Reading the deployment information, there's an interesting tension here with applications that target self-hosting.

Deploying this requires running 5 different open source servers (databases, proxies, etc.) and 5 different services that form part of this suite. If I were self-hosting this in a company, I'd need to be an expert in lots of different systems, and potentially in how to scale them, back them up, etc. The trade-offs to be made here are very different from those when architecting a typical SaaS backend, where this sort of architecture might be fine.

I've been going through this myself with a hobby project. I'm designing it for self-hosting, and it's a radically different way of working from what I'm used to (operating services just for my company). I've been using SQLite and local disk storage so that there are essentially just 2 components to operate and scale – application replicas, and shared disk storage (which is easy to back up too). I'd rather be using Postgres, and I'd rather be using numerous other services, background queue processors, etc., but each of those components is something that my users would need to understand, and therefore something to be minimised far more strictly than if it were just me/one team.
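
A minimal sketch of that two-component shape in Compose terms (the image name and paths here are hypothetical, for illustration only):

  services:
    app:
      image: example/hobby-app:latest   # hypothetical application image
      ports:
        - "8080:8080"
      volumes:
        - ./data:/data   # SQLite file and uploads live here; backing up
                         # means copying this one directory

Even with several application replicas, that leaves only two things to operate: stateless app containers and one directory of state.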

Huly looks like a great product, but I'm not sure I'd want to self-host.

replies(28): >>41834100 #>>41834175 #>>41834204 #>>41834282 #>>41834308 #>>41834334 #>>41834356 #>>41834450 #>>41834538 #>>41834603 #>>41834672 #>>41834792 #>>41834861 #>>41834865 #>>41834973 #>>41835133 #>>41835222 #>>41835339 #>>41835929 #>>41835949 #>>41836134 #>>41836856 #>>41836958 #>>41838118 #>>41839489 #>>41840080 #>>41876861 #>>41905212 #
1. letters90 No.41834792
I don't really see where you are getting that.

https://github.com/hcengineering/huly-selfhost

replies(1): >>41835017 #
2. matly No.41835017
That's actually supporting the poster's argument.

Take a look at all the configs and moving parts checked into this very repo that are needed to run a self-hosted instance. Yes, it is somewhat nicely abstracted away, but that doesn't change the fact that in the kube directory alone [1] there are 10 subfolders with even more config files.

1: https://github.com/hcengineering/huly-selfhost/tree/main/kub...

replies(2): >>41836153 #>>41836479 #
3. KronisLV No.41836153
> Yes, it is somewhat nicely abstracted away, but that doesn't change the fact that in the kube directory alone [1] there are 10 subfolders with even more config files.

That's just what you get with Kubernetes, most of the time: although powerful and widely used, it can be quite... verbose. For a simpler view of the same stack, you can look at https://github.com/hcengineering/huly-selfhost/blob/main/tem...

There, you have:

  mongodb       supporting service
  minio         supporting service
  elastic       supporting service
  account       their service
  workspace     their service
  front         their service
  collaborator  their service
  transactor    their service
  rekoni        their service
I would still opt for something simpler than that, and developing all of the above services would keep multiple teams busy, but the Compose format is actually nice when you want to understand at a glance what you're looking at.
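
For illustration, a trimmed sketch of what that Compose shape looks like (image names and values are placeholders, not the actual file):

  services:
    mongodb:                 # supporting service
      image: mongo:7
      volumes:
        - db:/data/db
    minio:                   # supporting service
      image: minio/minio
      command: server /data
    front:                   # one of their services
      image: example/front   # placeholder image name
      depends_on:
        - mongodb
        - minio
  volumes:
    db:

Each service is a few self-describing lines, which is why the full file stays readable even with nine of them.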
replies(1): >>41837566 #
4. wruza No.41836479
We can also take a look at the Linux kernel that powers the Docker instances and faint in terror.

These “moving parts” are implementation details which (iiuc) require no maintenance apart from backups via some obvious solutions. Wasn’t Docker made precisely so we could stop worrying about exactly this?

And you don’t need multiple roles, specialists, or competences for that; it’s a one-time task for a single sysop who can google and read man pages. This management-spoiled mindset hires one guy for every explicitly named thing. Tell them you’re using echo and printf and they’ll rush off to hire an output-ops team.

replies(1): >>41837640 #
5. matly No.41837566
As someone who develops native Kubernetes platforms: providing the raw resources/manifests is almost the worst way of shipping a user install. It works great only for as long as you never have a breaking change in your manifests or need any kind of more complex upgrade.
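
As a hypothetical example of the kind of breaking manifest change that pushes work onto the user (all names invented):

  # excerpt from a made-up Deployment container spec
  containers:
    - name: transactor
      image: example/transactor:2.0   # placeholder image
      env:
        # hypothetically renamed from MONGO_URL in 1.x; kubectl apply won't
        # carry over local edits, so every self-hoster has to diff the new
        # manifests against their patched copies by hand
        - name: DB_URL
          value: mongodb://mongodb:27017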

Which brings me back to the initial question: are this complexity and these external dependencies really needed? For a decently decomposed, highly scalable microservice architecture, maybe. For an open source, (likely) single-tenant management platform? Unlikely.

It highlights the problem of clashing requirements of different target user groups.

6. matly No.41837640
These moving parts require active understanding and maintenance, as they will change on each and every upgrade, which also requires manual upgrade steps and potential debugging on breaking changes. OCI images let you worry less about dependencies, but what they don't eliminate is debugging and/or upgrading k8s configuration manifests (which we are looking at here).

> We can also take a look at the Linux kernel that powers the Docker instances and faint in terror.

Sure, and computers are rocks powered by lightning - very, very frightening. That doesn't invalidate criticism of the usability and design of this very product, my friend.

replies(1): >>41838842 #
7. wruza No.41838842
> These moving parts require active understanding and maintenance, as they will change on each and every upgrade, which also requires manual upgrade steps and potential debugging on breaking changes

Maybe they won’t change, or the migrations will be backwards-compatible; we don’t know that in general. Pretty sure all the software installed on my PC uses numerous databases, yet somehow I’ve never upgraded any of them manually. I find the root position overdefensive at best.

If it were a specific criticism, fine. But it uses lots of assumptions as far as I can tell, cause it references no mds, configs, migrations, etc. It only projects a general idea about issues someone had at their org in some situation. This whole “moving parts” idiom is management speak. You either see a specific problem with a specific setup, or have to look inside to see it. Everything else is fortune telling.