152 points rgun | 11 comments
1. JohnMakin ◴[] No.46145088[source]
Having spent most of my career in kubernetes (usually managed by cloud), I always wonder when I see things like this, what is the use case or benefit of not having a control plane?

To me, the control plane is the primary feature of kubernetes and one I would not want to go without.

I know this describes operational overhead as a reason, but how that relates to the control plane is not clear to me. Even managing a few hundred nodes and maybe 10,000 containers - relatively small - I update once a year and the managed cluster updates machine images and versions automatically. Are people trying to self-host Kubernetes for production cases, and is that where this pain comes from?

Sorry if it is a rude question.

replies(5): >>46145152 #>>46145315 #>>46145389 #>>46145675 #>>46146251 #
2. baq ◴[] No.46145152[source]
> Are people trying to self host kubernetes

Of course they are…? That’s half the point of k8s - if you want to self host, you can, but it’s just like backups: if you never try it, you should assume you can’t do it when you need to

3. psviderski ◴[] No.46145315[source]
Not rude at all. The benefit is a much simpler model: you connect machines into a network where every machine is equal. You can add more, remove some. No need to worry about an HA 3-node centralised “cluster brain”. There isn’t one.

It’s a similar experience to a cloud provider managing the control plane for you. But when you host everything yourself, you have to worry about its availability. Losing etcd quorum results in an unusable cluster.

Many people want to avoid this, especially when running at a smaller scale like a handful of machines.

The cluster network can even partition, and each partition continues to operate, allowing you to deploy/update apps individually.

That’s essentially what we all did in the pre-k8s era with Chef and Ansible, but without the boilerplate and reinventing the wheel, and using the learnings from k8s and friends.
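The etcd quorum point above is just majority arithmetic: a cluster of N voting members keeps accepting writes only while a strict majority (floor(N/2) + 1) is reachable, so a 3-node control plane becomes read-only as soon as 2 nodes are lost. A quick illustrative sketch (not etcd code, just the arithmetic):

```python
# Majority quorum: a cluster of n voting members needs
# floor(n/2) + 1 reachable members to keep accepting writes.
def quorum(n: int) -> int:
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    # How many members can be lost before the cluster stops writing.
    return n - quorum(n)

for n in (1, 3, 5):
    print(f"{n} members: quorum {quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

Note that even member counts buy nothing: 4 members still tolerate only 1 failure, same as 3, which is why odd-sized clusters are the norm.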

replies(2): >>46145401 #>>46146050 #
4. kelnos ◴[] No.46145389[source]
> a few hundred nodes and maybe 10,000 containers, relatively small

That feels not small to me. For something I'm working on I'll probably have two nodes and around 10 containers. If it works out and I get some growth, maybe that will go up to, say, 5-7 nodes and 30 or so containers? I dunno. I'd like some orchestration there, but k8s feels way too heavy even for my "grown" case.

I feel like there are potentially a lot of small businesses at this sort of scale?

5. _joel ◴[] No.46145401[source]
k3s uses SQLite by default, not etcd.
replies(1): >>46146352 #
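For reference, k3s selects its datastore with the `--datastore-endpoint` flag; these invocations are illustrative sketches (hostnames, credentials, and tokens are placeholders - see the k3s docs for exact connection-string formats):

```shell
# Single server: embedded SQLite, no flags needed (the default).
k3s server

# External datastore, e.g. MySQL or Postgres, instead of SQLite/etcd:
k3s server --datastore-endpoint="mysql://user:pass@tcp(db-host:3306)/k3s"
k3s server --datastore-endpoint="postgres://user:pass@db-host:5432/k3s"

# HA with embedded etcd: init on the first server, join the others.
k3s server --cluster-init
k3s server --server https://first-server:6443 --token <token>
```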
6. esseph ◴[] No.46145675[source]
Try it on bare metal where you're managing the distributed storage and the hardware and the network and the upgrades too :)
replies(2): >>46146012 #>>46146025 #
7. JohnMakin ◴[] No.46146012[source]
Why would you want to do that though?

On cloud, in my experience, you are mostly paying for compute with managed Kubernetes instances. The overhead and price is almost never Kubernetes itself, but the compute and storage you are provisioning, which, thanks to the control plane, you have complete control over. What am I missing?

I wouldn’t dare, with a small shop, try to self-host a production Kubernetes solution unless I was under duress. But I just don’t see what the control plane has to do with it. It’s the feature that makes Kubernetes worth it.

8. lillecarl ◴[] No.46146025[source]
Tinkerbell / MetalKube, ClusterAPI, Rook, Cilium?

A control plane makes controlling machines easier, that's the point of a control plane.

9. JohnMakin ◴[] No.46146050[source]
If you are a small operation, self-hosting k3s or k8s or any of the out-of-the-box installations - which are probably at least as complex as Docker Compose swarms - for any non-trivial production case presents similar monitoring and availability problems to the ones you’d get with off-the-shelf cloud-provider managed services, except the managed solutions come without the pain in the ass. Except you don’t have a control plane.

I have managed custom server clusters in a self-hosted situation. The problems are hard, but if you’re small, why would you reach for such a solution in the first place? You’d be better off paying for a managed service. What situation forces so many people to reach for self-hosted Kubernetes?

10. motoboi ◴[] No.46146251[source]
Kubernetes is not only an orchestrator but a scheduler.

It’s a way to run arbitrary processes on a bunch of servers.

But what if your processes are known beforehand? Then you don’t need a scheduler, nor an orchestrator.

What if it’s just your web app with two containers and nothing more?
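For that fixed, known-in-advance case, a plain Compose file on one host already covers it. A generic sketch (image names, ports, and credentials are placeholders):

```yaml
# docker-compose.yml - two fixed containers, no scheduler needed.
services:
  web:
    image: my-web-app:latest   # placeholder image
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```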

11. davidgl ◴[] No.46146352{3}[source]
It can use SQLite (the single-server default), or for an HA cluster it can use embedded etcd, or an external datastore like Postgres or MySQL.