152 points by rgun | 35 comments
1. psviderski ◴[] No.46144570[source]
Hey, creator here. Thanks for sharing this!

Uncloud[0] is a container orchestrator without a control plane. Think multi-machine Docker Compose with automatic WireGuard mesh, service discovery, and HTTPS via Caddy. Each machine just keeps a p2p-synced copy of cluster state (using Fly.io's Corrosion), so there's no quorum to maintain.

I’m building Uncloud after years of managing Kubernetes in small envs and at a unicorn. I keep seeing teams reach for K8s when they really just need to run a bunch of containers across a few machines with decent networking, rollouts, and HTTPS. The operational overhead of k8s is brutal for what they actually need.

A few things that make it unique:

- uses the familiar Docker Compose spec, no new DSL to learn (see the sketch below the links)

- builds and pushes your Docker images directly to your machines without an external registry (via my other project unregistry [1])

- imperative CLI (like Docker) rather than declarative reconciliation. Easier mental model and debugging

- works across cloud VMs, bare metal, even a Raspberry Pi at home behind NAT (all connected together)

- minimal resource footprint (<150 MB RAM)

[0]: https://github.com/psviderski/uncloud

[1]: https://github.com/psviderski/unregistry
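
For a taste of the workflow, a minimal sketch (the service name and image are placeholders, and uncloud-specific compose extensions are omitted):

    # compose.yaml - plain Compose spec; name and image are placeholders
    services:
      web:
        image: ghcr.io/example/web:latest

Then deploy it across your machines with a single imperative command:

    uc deploy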

replies(11): >>46144726 #>>46144768 #>>46144784 #>>46144846 #>>46144978 #>>46145074 #>>46145335 #>>46145652 #>>46145808 #>>46146155 #>>46146244 #
2. olegp ◴[] No.46144726[source]
How's this similar to and different from Kamal? https://kamal-deploy.org/
replies(1): >>46144913 #
3. topspin ◴[] No.46144768[source]
"I keep seeing teams reach for K8s when they really just need to run a bunch of containers across a few machines"

Since k8s is very effective at running a bunch of containers across a few machines, it would appear to be exactly the correct thing to reach for. At this point, running a small k8s operation, with k3s or similar, has become so easy that I can't find a rational reason to look elsewhere for container "orchestration".

replies(5): >>46144793 #>>46144975 #>>46145092 #>>46145301 #>>46145392 #
4. mosselman ◴[] No.46144784[source]
You have a diagram that shows a multi-provider setup for a domain. Where would routing to either machine happen? As in, which IP would you use on the DNS side?
replies(1): >>46145223 #
5. nullpoint420 ◴[] No.46144793[source]
100%. I’m really not sure why K8S has become the complexity boogeyman. I’ve seen CDK apps or docker compose files that are way more difficult to understand than the equivalent K8S manifests.
replies(1): >>46145645 #
6. woile ◴[] No.46144846[source]
Does it support IPv6?
replies(1): >>46144939 #
7. psviderski ◴[] No.46144913[source]
I took some inspiration from Kamal, e.g. the imperative model, but Kamal is more of a deployment tool.

In addition to deployments, uncloud handles clustering - connects machines and containers together. Service containers can discover other services via internal DNS and communicate directly over the secure overlay network without opening any ports on the hosts.

As far as I know, Kamal doesn’t provide an easy way for services to communicate across machines.

Services can also be scaled to multiple replicas across machines.
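
For example, with a hypothetical service named "api" listening on port 8080, any other container in the cluster can reach it by name over the overlay:

    # run from inside any service container; name and port are placeholders
    curl http://api:8080/health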

replies(2): >>46144944 #>>46146343 #
8. psviderski ◴[] No.46144939[source]
There is an open issue confirming that enabling IPv6 for containers works: https://github.com/psviderski/uncloud/issues/126 (but it hasn’t been enabled by default).

What specifically do you mean by IPv6 support?

replies(1): >>46145257 #
9. olegp ◴[] No.46144944{3}[source]
Thanks! I noticed afterwards that you mention Kamal in your readme, but you may want to add a comparison section, linked from the readme, comparing your solution to others.

Are you working on this full time and if so, how are you funding it? Are you looking to monetize this somehow?

replies(1): >>46145037 #
10. psviderski ◴[] No.46144975[source]
That’s awesome if k3s works for you; nothing wrong with that. You’re simply not the target user then.
11. zbuttram ◴[] No.46144978[source]
Very cool! I think I'll have an opportunity soon to give it a shot; I have just the set of projects that has been needing a tool like this. One thing I seem to be missing after perusing the docs, however: how does one onboard other engineers to the cluster after it has been set up? And similarly, how does deployment from a CI/CD runner work? I don't see anything about connecting to an existing cluster from a new machine, or at least nothing I'm recognizing.
replies(1): >>46145191 #
12. psviderski ◴[] No.46145037{4}[source]
Thank you for the suggestion!

I’m working on this full time, yes. I’m funding it from my savings at the moment and don’t have plans for any external funding or VC.

For monetisation, I’m considering building a self-hosted and managed (SaaS) web UI for managing remote clusters and the apps on them, with value-added PaaS-like features.

replies(1): >>46145056 #
13. olegp ◴[] No.46145056{5}[source]
That sounds interesting, maybe I could help on the business side of things somehow. I'll email you my calendar link.
replies(1): >>46145731 #
14. utopiah ◴[] No.46145074[source]
Neat. Since you include quite a few tools for making services reachable to each other (not necessarily to the outside), do you also have tooling to make those services more interoperable?
replies(1): >>46145494 #
15. jabr ◴[] No.46145092[source]
I can only speak for myself, but I considered a few options, including "simple k8s" like [Skate](https://skateco.github.io/), and ultimately decided to build on uncloud.

It was as much personal "taste" as anything, and I would describe the choice as similar to preferring JSON over XML.

For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.

16. jabr ◴[] No.46145191[source]
There isn't a CLI function yet for adding a connection (independently of adding a new machine/node), but connections live in a simple config file (`~/.config/uncloud/config.yaml`) that you can copy or easily create manually for now. It looks like this:

    current_context: default
    contexts:
      default:
        connections:
          - ssh: admin@192.168.0.10
            ssh_key_file: ~/.ssh/uncloud
          - ssh: admin@192.168.0.11
            ssh_key_file: ~/.ssh/uncloud
          - ssh: administrator@93.x.x.x
            ssh_key_file: ~/.ssh/uncloud
          - ssh: sysadmin@65.x.x.x
            ssh_key_file: ~/.ssh/uncloud
And you really just need one entry for typical use. The subsequent entries are only used if the previous node(s) are down.
replies(1): >>46146082 #
17. calgoo ◴[] No.46145223[source]
Not OP, but you could do "simple" DNS load balancing between both endpoints.
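
For example, publish an A record per machine under the same name (IPs here are illustrative) and resolvers will rotate between them:

    app.example.com.  300  IN  A  203.0.113.10  ; machine at provider A
    app.example.com.  300  IN  A  198.51.100.20 ; machine at provider B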
18. miyuru ◴[] No.46145257{3}[source]
> What specifically do you mean by IPv6 support?

This question does not make sense. It is equivalent to asking "What specifically do you mean by IPv4 support?"

These days both protocols must be supported, and if there is a blocker it should be clearly mentioned.

replies(1): >>46145854 #
19. matijsvzuijlen ◴[] No.46145301[source]
If you already know k8s, this is probably true. If you don't, it's hard to know which bits you need, and need to learn about, to get something simple set up.
20. unixfox ◴[] No.46145335[source]
Awesome tool! Does it provide some of the basic features that you would get from running a control plane?

Like automatically rescheduling a container onto another server if a server goes down? Or deploying to the least loaded server first if you have set limits on your containers?

21. _joel ◴[] No.46145392[source]
Indeed, it seems like a knee-jerk response without justification. k3s is pretty damn minimal.
22. jabr ◴[] No.46145494[source]
Do you have an example of what you mean? I'm not entirely clear on your question.
23. esseph ◴[] No.46145645{3}[source]
Managing hundreds or thousands of containers across hundreds or thousands of k8s nodes has a lot of operational challenges.

Especially in-house on bare metal.

replies(3): >>46146023 #>>46146036 #>>46146151 #
24. avan1 ◴[] No.46145652[source]
Thanks for both great tools. There's just one thing I didn't understand: the request flow. Imagine we have 10 servers: how is it decided that one request goes to server 1 and another goes to server 7, for example? And since it's zero-downtime, how does it know that server 5 is updating, so that no requests go there until it's back up?
replies(1): >>46145963 #
25. psviderski ◴[] No.46145731{6}[source]
Awesome, will reach out!
26. doctorpangloss ◴[] No.46145808[source]
haha, uncloud does have a control plane: the mind of the person running "uc" CLI commands

> I’m building Uncloud after years of managing Kubernetes

did you manage Kubernetes, or did you make the fateful mistake of managing microk8s?

27. justincormack ◴[] No.46145854{4}[source]
How do you want to allocate IPv6 addresses to containers? It turns out there are lots of answers. Some people even want to do IPv6 NAT.
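
For context, even plain Docker makes you pick an allocation strategy, e.g. a ULA prefix in its daemon config (the prefix below is illustrative):

    # /etc/docker/daemon.json
    {
      "ipv6": true,
      "fixed-cidr-v6": "fd00:cafe::/64"
    }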
28. psviderski ◴[] No.46145963[source]
I think there are two different cases here. Not sure which one you’re talking about.

1. External requests, e.g. from the internet via the reverse proxy (Caddy) running in the cluster.

The rollout works at the container level, not the server level. Each container registers itself in Caddy, so Caddy knows which containers to forward and distribute requests to.

When doing a rollout, a new version of the container is started first and registers in Caddy, then the old one is removed. This is repeated for each service container. This way, at any point in time there are running containers to serve requests.

It doesn’t tell any server that requests shouldn’t go there; it just updates the upstreams in the Caddy config to send requests only to the containers that are up and healthy.
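
A sketch of what "healthy" can mean here, using a standard Compose healthcheck (placeholders throughout; whether uncloud keys off this exact check isn't stated in this thread):

    services:
      web:
        image: ghcr.io/example/web:latest
        healthcheck:
          test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
          interval: 10s
          timeout: 3s
          retries: 3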

2. Service to service requests within the cluster. In this case, a service DNS name is resolved to a list of IP addresses (running containers). And the client decides which one to send a request to or whether to distribute requests among them.

When the service is updated, the client needs to resolve the name again to get the up-to-date list of IPs. Many HTTP clients handle this automatically, so using http://service-name as an endpoint typically just works. But zero downtime still has to be handled by the client in this case.
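
To illustrate the second case with a hypothetical service named "api" (IPs are illustrative):

    # from inside a client container in the cluster
    $ dig +short +search api
    10.210.0.5
    10.210.1.7

    # most HTTP clients re-resolve per connection, so this typically just works
    $ curl http://api/health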

29. lnenad ◴[] No.46146023{4}[source]
But that's not what anyone is arguing here, nor (it seems to me at least) what uncloud is about. It's about a simpler HA multi-node setup with a single-digit or low double-digit number of containers.
30. sceptic123 ◴[] No.46146036{4}[source]
I don't think that argument matches the "just need to run a bunch of containers across a few machines" case.
31. psviderski ◴[] No.46146082{3}[source]
For CI/CD, check out this GitHub Action: https://github.com/thatskyapplication/uncloud-action

You can either specify one of the machine SSH targets in config.yaml or pass one directly to the `uc` CLI, e.g.

    uc --connect user@host deploy
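
If you'd rather drive the CLI directly from a workflow, a minimal sketch (the secret name and host are placeholders, and it assumes the uc binary is already installed on the runner; the linked action packages this up properly):

    # .github/workflows/deploy.yml - a sketch, not the linked action's exact usage
    name: deploy
    on:
      push:
        branches: [main]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: webfactory/ssh-agent@v0.9.0
            with:
              ssh-private-key: ${{ secrets.UNCLOUD_SSH_KEY }}
          # assumes uc is preinstalled on the runner; host and user are placeholders
          - run: uc --connect deploy@203.0.113.10 deploy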

32. nullpoint420 ◴[] No.46146151{4}[source]
Talos has made this super easy in my experience.
33. oulipo2 ◴[] No.46146155[source]
So it's a kind of better Docker Swarm? It's interesting, but honestly I'd rather have something declarative so I can use it with Pulumi. Would it be complicated to add a declarative engine on top of the tool, one that discovers which services are already up, diffs against the new declaration, and handles the changes?
34. tex0 ◴[] No.46146244[source]
This is a cool tool and I like the idea. But the way `uc machine init` works under the hood is really scary. Lots of `curl | bash` run as root.

While I would love to test this tool, this is not something I would run on any machine :/

35. cpursley ◴[] No.46146343{3}[source]
This is neat. Regarding clustering: can this work with distributed Erlang/Elixir?