
354 points by geoctl | 3 comments

I have been working on Octelium for quite a few years now, but it was only open sourced in late May 2025. Octelium, as described in more detail in the repo's README, is an open source, self-hosted, unified platform for zero trust resource access, primarily meant as a modern alternative to corporate VPNs and remote access tools. It can operate as a remote access/corporate VPN (i.e. an alternative to Twingate, Tailscale, OpenVPN Access Server, etc.), a ZTNA/BeyondCorp platform (i.e. an alternative to Cloudflare Access, Teleport, Google BeyondCorp, etc.), an API/AI gateway, an infrastructure for MCP and A2A architectures and meshes, an ngrok alternative, a homelab infrastructure, or even a more advanced Kubernetes ingress. It's basically designed to operate as a unified, Kubernetes-like, scalable architecture for zero trust secure/remote access that's suitable for different human-to-workload and workload-to-workload environments. You can read about the full set of main features and how it works in the repo's README or directly in the docs: https://octelium.com/docs
1. meeby No.44419615
I think a lot of the other commenters here are taking issue with the concept of ZTNs in general and may not have used them at scale, or at all. The intro is buzzword-heavy, but I don't think that's an issue with your documentation; it's the terminology of the space Octelium operates in, which came from big business and was conceived as buzzwords to start with, so there's not a lot of flexibility to use alternative terms that make sense.

Your documentation is extremely detailed and generally excellent, but it does seem to be targeted at people who have already deployed Octelium or are very familiar with ZTN-style deployments. It's quite fractally dense (you stumble over one new term, go to another docs page that's just as long, which has more new terms to read about, and so on), so as you've mentioned, the issue really isn't your product but conveying what it does clearly.

If you want to get general devs and homelabbers on board with the concept or testing this out, which I imagine is a very different target to your initial versions, maybe you could prefix your GitHub readme with something like:

"Octelium is a free and open source zero trust network platform. It integrates with and manages your Kubernetes (and other container orchestration) applications to provide a single point of authentication for all services. Your users log in once to one authentication provider, such as a managed provider like Okta or any other OAuth/OpenID Connect-compatible service, and are then automatically granted the correct access levels to all your web services, databases, SSH, VPNs and more. Log in once, access everything, self-hosted."

When reading your documentation I immediately had a number of questions that were not clearly answered. That's not to say the answers aren't in your documentation; it's just that after 15-20 minutes of reading I still didn't have clear answers. I'm reading this from the perspective of someone very familiar with operating Kubernetes clusters at scale who has dabbled quite a bit with some of the commercial ZTN offerings. Apologies if the questions below are answered in your docs; I didn't find them in the time I had.

1) Your initial setup guides go from installing Octelium straight to scheduling services via YAML as a direct replacement for, I assume, something like Deployments on k8s. Does Octelium actually run workloads? Is it 1:1 compatible with the k8s API spec? Does it just talk to k8s over the API, effectively rewriting deployment YAML and pushing it to k8s? This immediately raises concerns: why do I want this? Do I trust Octelium to manage deployments for me? Replacing a vast part of the k8s stack with Octelium tooling is a big ask for even small companies to trial. There are also straight upstream connections: why would I let Octelium manage workloads rather than just use an internal k8s service hostname, so I don't have to effectively rebuild the entire application around Octelium? Does letting Octelium manage workloads impact anything else (monitoring, logging, other deployment tools that interact with k8s; if a CI/CD pipeline updates a container image, does Octelium "know", or is it out of date)? What about RBAC? Namespaces? Are these 1:1 k8s compatible?

2) If I work for BigCorp I'm going to have compliance issues coming out of my ears, and your Services storing credentials in plain text is going to be flagged immediately. No one is going to offload SSH authentication if root SSH keys are stored in plain text in secrets somewhere. I did note there's the option to effectively write your own gRPC interface to handle secure secrets storage, but that seems like a pretty big hurdle. You then basically say "if you're enterprise we can help here" at the bottom, but I wouldn't even test this on a homelab without some sort of saner basic secret management.

3) How, specifically, does Octelium handle HTTP upstream services? For example, if I'm proxying my company's connections to saas.com via saas.myintranet.example.com, I assume I lose the ability to apply 2FA to user accounts? It's unclear from the documentation: can I specify unique credentials per user? How would I manage those? Is the only option OAuth and scopes? What if the upstream doesn't support OAuth? It's fine if upstream services need to support OAuth for the most seamless and secure ZTN-style experience; it's just not clear how these things work in practice.

4) You reuse a lot of terms from k8s (cluster, service, etc.) that are subtly different from their k8s meanings, which could confuse a lot of k8s-trained admins. For example, I believe an Octelium cluster is a k8s cluster running multiple Octelium instances and a custom data plane; that's different enough from what a k8s cluster is to be potentially confusing, as it sits at a different logical level. I appreciate there are only so many generic descriptive terms in the dictionary for these things.

5) How, technically, does some of this stuff work? I know I can browse the repo or read the extensive documentation, but it would be helpful to have some clear "k8s admin"-targeted explanations: we install X proxy that does Y, using Z certificates; services managed by Octelium get a sidecar that intercepts connections to proxy them back into the ZTN; or whatever magic it does, in clear k8s terms. If I'm looking after something like this I need to know it inside out. What if some certificate stops working? If I don't know exactly where everything is, how can I debug it? If I deployed this it would be absolutely mission-critical; if it breaks, the entire company immediately breaks, and if that lasts a couple of days it can realistically be business-terminating.

What would be really helpful to see is some step-by-step documentation on going from a starting point to an Octelium-powered endgame. For example: I'm the CTO of a business with some remote managed k8s clusters running our public SaaS along with some tooling/control panels/etc., a big server in my office running some non-critical but important apps via Podman, 15 remote users who currently connect over a Tailscale VPN into the office, and 20+ upstream external web services providing support ticketing, email, etc. What does this look like after I "switch" to Octelium? Comprehensive before-and-after diagrams, and maybe even a short before-video of someone logging in via a VPN and then into 15 control panels vs an after-video of someone logging into an intranet with everything magically appearing, would be nice. You need to sell something of this scale to CTOs, not engineers.

Generally, however, my biggest personal flag is that this effectively requires a total rebuild of all infrastructure, plus learning entirely new tooling, management and oversight. It's a lot to ask, and it doesn't seem like it can be rolled out all that gradually. I suppose you could set up a second k8s cluster, deploy Octelium on it and then slowly migrate services over, but that's going to be expensive.

Some suggestions:

1) You have built a very impressive toolkit; why not operate it as a SaaS (with on-prem open source extensions)? You don't seem to have an actual product; you have a Swiss Army knife of ZTN tools, which is way more work, so hats off to you there. It seems you're positioning this as "the toolkit is public, write your own glue for free; for a working platform, come pay us", which is perfectly fine. The issue is that your "enterprise" offerings seem to be even more toolkit options rather than an actual offering. Bundle in some actual enterprise tools (a fancy log tool with a web UI that also does user management and connection monitoring in a single dashboard?) and it would be a lot more appealing.

2) You require replacing industry-standard workflows, like managing k8s directly with kubectl, with your own platform and tools. Replacing "the way everyone does X" with "new tool from a relatively unknown ZTN provider" is going to be a tough sell. Can you make your octeliumctl tool a kubectl plugin? Could Octelium be less control-everything and just sit on top of k8s rather than seemingly replace a lot of it?

Edit: 3) If homelabbers are a market you want to target, have a homelab edition that's way easier to install and runs in a single Docker container or something (sacrificing some of the redundancy) but offers a basic complete suite of services (dashboard, HTTP upstream proxying, VPN) with a single "docker run ...".

Good luck with Octelium, it's certainly interesting and I'll keep an eye on it as it evolves.

2. geoctl No.44420040
Thank you for your detailed comment. I will try to answer your questions; please don't hesitate to ask in the Slack/Discord channels or via the contact emails later if the answers here aren't clear enough.

1. Octelium Services and Namespaces are not related or tied to Kubernetes services and namespaces. Octelium resources in general are defined in protobuf3, compiled to Golang types, and stored as serialized JSON in the main Postgres data store, simply as JSONB. That said, Octelium Services in particular are actually deployed on the underlying k8s cluster as k8s services/deployments. Octelium resources visually look like k8s resources (they share the same metadata/spec/status structure), but they are completely independent of the underlying k8s cluster; they aren't k8s CRDs as you might initially guess. Octelium also has its own API server, which performs REST-y gRPC-based operations on the different Octelium resources against Postgres via an intermediate component called the rscServer. As I said, Octelium and k8s resources are completely separate regardless of the visual YAML resemblance. As for managed containers, you don't have to use them; it's an optional feature. You can deploy your own k8s services via kubectl/helm and use their hostnames as upstreams for Octelium Services to protect, like any other upstream. Managed containers are meant to automate the entire process of spinning up containers, scaling them up and down, and eventually cleaning up the underlying k8s pods and deployments once you're done with the owning Octelium Service.
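To make that resource model concrete, here's a rough sketch (field names are illustrative, not the real Octelium schema) of how a k8s-looking metadata/spec/status resource ends up as plain serialized JSON suitable for JSONB storage, rather than as a CRD:

```python
import json

# Illustrative only: mimics the k8s-like metadata/spec/status shape
# described above; the actual Octelium Service schema may differ.
service = {
    "metadata": {"name": "pg-main"},
    "spec": {
        # Point the Service at an existing k8s service hostname as its
        # upstream instead of using managed containers.
        "config": {"upstream": {"url": "postgres://pg.default.svc:5432"}},
    },
    "status": {},
}

# Stored in the main Postgres data store as JSONB, i.e. as plain
# serialized JSON, completely outside the k8s API/etcd:
jsonb_value = json.dumps(service)
restored = json.loads(jsonb_value)
print(restored["metadata"]["name"])  # → pg-main
```

The point is just that the YAML resemblance to k8s is cosmetic; the storage and API path are Octelium's own.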

2. Octelium Secrets are stored in plaintext by default. That's a conscious, deliberate decision, as stated in the docs, because there isn't any one standard way to encrypt Secrets at rest. Mainline Kubernetes does exactly the same and provides a gRPC interface for interceptors to implement their own secret management (e.g. HashiCorp Vault, AWS KMS/GCP/Azure offerings, software/hardware HSMs directly, some vault/secret management vendor I've never heard of, etc.). There is simply no single way to do this; every company has its own regulatory rules, vendors and standards when it comes to secret management at rest. I provide a similar gRPC interface so everybody can intercept such operations and implement their own secret management according to their own needs and requirements.
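As a rough illustration of that interception model (the names and shapes here are made up for this sketch; the real interface is gRPC-based), the contract boils down to an encrypt/decrypt pair you implement against your own KMS, vault or HSM:

```python
import base64
from abc import ABC, abstractmethod

# Hypothetical sketch of a secret-storage interceptor contract.
class SecretEncryptionBackend(ABC):
    @abstractmethod
    def encrypt(self, plaintext: bytes) -> bytes: ...

    @abstractmethod
    def decrypt(self, ciphertext: bytes) -> bytes: ...

class Base64Backend(SecretEncryptionBackend):
    """Stand-in backend for demonstration only: base64 is encoding, not
    encryption. A real implementation would call Vault, a cloud KMS, or
    an HSM here."""
    def encrypt(self, plaintext: bytes) -> bytes:
        return base64.b64encode(plaintext)

    def decrypt(self, ciphertext: bytes) -> bytes:
        return base64.b64decode(ciphertext)

backend = Base64Backend()
stored = backend.encrypt(b"root-ssh-key")      # what lands in the data store
print(backend.decrypt(stored).decode())        # → root-ssh-key
```

Swapping the backend is the whole point: the platform stays agnostic about which vendor does the encryption at rest.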

3. Octelium has Session resources https://octelium.com/docs/octelium/latest/management/core/se... Every User can have one or more Sessions, and every Session is represented by an opaque JWT-like access token. These tokens are used internally by the octelium/octeliumctl clients after a successful login, set as HTTPOnly cookies for BeyondCorp browser-based Sessions, and used directly as bearer tokens by WORKLOAD Users for client-less access to HTTP-based resources. You can set different permissions for different Users, and even different permissions for different Sessions of the exact same User, via the owning Credentials or via your Policies. The OAuth2 client credentials flow is only intended for WORKLOAD Users; HUMAN Users don't use OAuth2 client credentials at all. They just log in via OIDC/SAML through the web Portal, or manually via an issued authentication token, which is generally not recommended for HUMAN Users. OAuth2 is meant for WORKLOAD Users to securely access HTTP-based Services without any clients or SDKs. OAuth2 scopes are not really related to zero trust at all, as mentioned in the docs; scopes are just an additional way for applications to further restrict their own permissions, not to add new ones beyond what your Policies already set.
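For the client-less WORKLOAD case, that just means an ordinary HTTP request carrying the Session's access token as a bearer token. A minimal sketch (the URL and token below are fabricated for illustration):

```python
from urllib.request import Request

# Both the hostname and token value are made up; in practice the token
# is the Session's opaque JWT-like access token.
access_token = "eyJhbGciOi.example.token"
req = Request(
    "https://api.internal.example.com/v1/items",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(req.get_header("Authorization"))  # → Bearer eyJhbGciOi.example.token
```

No SDK or local client is involved; the proxy fronting the Service authenticates the bearer token and applies the Policies.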

4. An Octelium Cluster runs on top of a k8s cluster. In a typical multi-node production k8s cluster, you should use at least one node for the control plane and another separate node for the data plane, and scale up as your needs grow. Knowledge of Kubernetes isn't really required to manage an Octelium Cluster; as I mentioned above, Octelium resources and k8s resources are completely separate. The one exception where you directly deal with the underlying k8s cluster is setting/updating the Cluster TLS certificate, which needs to be fed to the Octelium Cluster via a specific k8s secret, as mentioned in the docs. Apart from that, you don't really need to understand anything about the underlying k8s cluster.
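For reference, feeding a TLS certificate to a k8s cluster via a secret is the standard `kubernetes.io/tls` shape below; the name and namespace here are placeholders, and the actual ones Octelium expects are in its docs:

```yaml
# Hypothetical example: check the Octelium docs for the exact
# secret name and namespace the Cluster expects.
apiVersion: v1
kind: Secret
metadata:
  name: cluster-tls        # placeholder name
  namespace: octelium      # placeholder namespace
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```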

5. To explain Octelium more technically from a control plane/data plane perspective: Octelium does for identity-aware proxies what Kubernetes itself does for containers. It builds a whole control plane around the identity-aware proxies, which themselves represent and implement the Octelium Services: it automatically deploys them whenever you create an Octelium Service via `octeliumctl apply`, scales them up and down whenever you change the Service's replicas, and eventually cleans them up whenever you delete the Service. As I said, it's similar to what k8s itself does with containers, where your interactions with the k8s API server, whether via kubectl or programmatically, are enough for the cluster to spin up containers and do all the CRI/CNI work automatically, without you even having to know or care which node actually runs a specific container.

As for the suggestions:

1. I am not really interested in SaaS offerings myself, and you can clearly see the architecture is meant for self-hosting, as opposed to having a separate cloud-based control plane. I do, however, offer both free and professional support for companies that need to self-host Octelium on their own infrastructure according to their own needs.

2. I am not sure I understand this one. But if I understood you correctly, then as I said above, Octelium resources are completely different and separate from k8s resources, regardless of the visual YAML resemblance. Octelium has its own API server, rscServer and data store, and it does not use CRDs or touch the k8s data store, whether that's etcd or something else.

3. mrbluecoat No.44423302
Reminds me of the confusion when https://openziti.io/ was shared on HN a while back. ZTN is a very different beast from WireGuard derivatives.