
Go-Safeweb (github.com)
188 points jcbhmr | 14 comments
pushupentry1219 No.42133267
Not sure how I feel about the HTTPS/TLS related bits. These days anything I write in Go uses plain HTTP, and the TLS is done by a reverse proxy of some variety that does some other stuff with the traffic too, including security headers, routing different paths to different services, etc. I never run a Go web application "bare", public-facing and manually supplying cert files.
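
Roughly what I mean, as a minimal sketch; the proxy in front is assumed to terminate TLS and add the security headers, and the port is made up:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello\n"))
        })

        // Plain HTTP, bound to loopback; nginx/Caddy/Envoy in front terminates
        // TLS and sets HSTS, CSP, etc. before traffic ever reaches this process.
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", mux))

        // The "bare" setup I avoid: the Go process itself holds the cert files.
        // log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", mux))
    }
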
replies(6): >>42133422 #>>42133588 #>>42133628 #>>42134049 #>>42134283 #>>42135953 #
ongy No.42133588
I suspect this is partially from Google's internal zero-trust cluster networking.

I.e. even if the communication is entirely between components inside a k8s (or borg) cluster, it should be authenticated and encrypted.

In this model, there may be a reverse proxy at the edge of the cluster, but the communication between this service and the internal services would still be HTTPS. With systems like cert-manager it's also incredibly easy to supply every in-cluster process with a certificate from the cluster-internal CA.
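
For concreteness, a minimal sketch of the serving side, assuming cert-manager has mounted the issued Secret at /etc/tls (paths and port are placeholders, not from this project):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // What a cert-manager Certificate mounted from its Secret usually
        // looks like on disk; adjust paths to your setup.
        caPEM, err := os.ReadFile("/etc/tls/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        srv := &http.Server{
            Addr: ":8443",
            TLSConfig: &tls.Config{
                ClientCAs:  pool,
                // mTLS: peers must present a cert from the cluster-internal CA.
                ClientAuth: tls.RequireAndVerifyClientCert,
            },
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("ok\n"))
            }),
        }
        log.Fatal(srv.ListenAndServeTLS("/etc/tls/tls.crt", "/etc/tls/tls.key"))
    }
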

-- Googler, not related to this project

replies(2): >>42133623 #>>42136458 #
1. cyberpunk No.42133623
Why wouldn’t you use istio or cilium for this?
replies(2): >>42133703 #>>42134434 #
2. grogenaut No.42133703
Why add another layer if you aren't already using istio or cilium?
replies(1): >>42133841 #
3. cyberpunk No.42133841
Because it’s zero-configuration auto-mTLS between all the services in your cluster (or intra-node if cilium), instead of managing a TLS cert for every service?
replies(1): >>42133881 #
4. sofixa No.42133881{3}
Zero to little configuration at the point of use, but a lot of upfront configuration, maintenance, and fun issues when you need something slightly less traditional (e.g. something that needs raw TCP or, heaven forbid, UDP). Different trade-offs for different situations.
replies(1): >>42134052 #
5. cyberpunk No.42134052{4}
I still think it’s far less work than managing TLS per service.

Every component needs a different TLS configuration, vs. installing istio once.

Raw TCP is supported by istio even with mTLS; you just have to match on SNI in your VirtualServices instead of the Host header.

We routinely mix TCP and HTTP services on the same external ports, with mTLS for both.
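
From the client's side, the only thing that SNI match depends on is the ServerName the client sends. A rough Go sketch with made-up hostnames (inside the mesh the sidecar normally handles this, and a real mTLS client would also present a certificate):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // Dial the shared external port; routing happens on the SNI value,
        // not on an HTTP Host header, so ServerName is set explicitly.
        conn, err := tls.Dial("tcp", "edge.example.com:443", &tls.Config{
            ServerName: "cassandra.internal.example.com",
            // Certificates: ... // a real mTLS client would also set this
        })
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        fmt.Printf("TLS established, version 0x%x\n", conn.ConnectionState().Version)
    }
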

I don’t really see how UDP is relevant to a conversation about TLS.

replies(1): >>42134342 #
6. sofixa No.42134342{5}
> one time installing istio

And never update it afterwards?

> UDP I don’t really see how is relevant to a conversation on tls

You might have UDP services alongside your TCP/HTTP behind TLS.

replies(1): >>42134687 #
7. ongy No.42134434
This might be me being daft, but I never quite understood the appeal of doing this with istio. Or maybe that's partially just due to the timing of when I started to care about things in the k8s world (rather recently).

My understanding of that model is that the services themselves still just do unauthenticated HTTP, this gets picked up on the client side by a sidecar, packed into mTLS/HTTPS, authed+unpacked on the server sidecar, then passed as plain HTTP to the server itself.

This is great when we have intra-host vulnerabilities, I guess. But it doesn't allow you to e.g. have code sanitizers that are strict about using TLS properly (Google does this).

And while it is a real gain over simply going unauthenticated over an untrusted network between nodes, with cilium taking care of keeping the intra-node networking secure I don't quite see how the added layer is more useful than strictly using network policies.

(besides some edge cases where it's used to further apply internal authorization based on the identity. Though that trusts the "network" (istio) again.)
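
That identity-based authorization usually ends up looking something like this on the app side, as I understand it: trust the x-forwarded-client-cert header the sidecar forwards (the SPIFFE ID below is made up), which is exactly the trusting-the-mesh caveat:

    package main

    import (
        "log"
        "net/http"
        "strings"
    )

    // xfccSPIFFE pulls the URI= part out of Envoy's x-forwarded-client-cert
    // header, which Istio sidecars populate with the peer's SPIFFE identity.
    func xfccSPIFFE(r *http.Request) string {
        for _, part := range strings.Split(r.Header.Get("x-forwarded-client-cert"), ";") {
            if strings.HasPrefix(part, "URI=") {
                return strings.TrimPrefix(part, "URI=")
            }
        }
        return ""
    }

    func main() {
        http.HandleFunc("/admin", func(w http.ResponseWriter, r *http.Request) {
            // Only safe if the mesh strips/sets this header on every hop --
            // the app itself never sees the client certificate.
            if xfccSPIFFE(r) != "spiffe://cluster.local/ns/ops/sa/admin-tool" {
                http.Error(w, "forbidden", http.StatusForbidden)
                return
            }
            w.Write([]byte("ok\n"))
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }
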

replies(2): >>42136717 #>>42144696 #
8. cyberpunk No.42134687{6}
At least in our org, security lets us know when it's time to patch various components, and it's typically just a devops chore to bump a helm chart version and merge.

I don't really understand your point; you're trying to say managing a single helm release for istio is more effort than (in my case, for example) manually managing around 40 TLS certificates (and yes, we have an in-house PKI with our own CA that issues via ACME/certbot etc. too) and the services that use them? It's clearly not?

Just templating out the config files for e.g. Cassandra or ES or Redis or whatever component takes several times the effort of a helm install istio.

replies(1): >>42135697 #
9. sofixa No.42135697{7}
Istio is a notorious pain to maintain, because it has a bunch of dependencies around Kube clusters, so you can't just helm install istio every time there's a new release.
replies(1): >>42136110 #
10. cyberpunk No.42136110{8}
That’s not my experience at all and I’ve run hundreds of clusters across multiple cloud providers and on bare metal.

You absolutely can helm upgrade istio, why not?

Can you give any actual examples of this?

11. liveoneggs No.42136717
In the modern world, extra network hops, novel userland network stacks, and additional cycles of decrypting/re-encrypting traffic make your apps go faster, not slower.
replies(1): >>42140032 #
12. nine_k No.42140032{3}
Not sure if that's ironic or not. Because it shouldn't need to be.

AES-NI gives you encryption at basically the speed of memcpy. Userland network stacks are faster because they don't incur the kernel-call cost. With that, if your NIC drivers support zero-copy access, an extra hop to a machine in the same rack over a 10G link is barely noticeable, maybe shorter than an L3 miss.

The cost of this is mostly more hardware and more power used, but not much or any additional latency.
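
A rough way to sanity-check the "speed of memcpy" part on your own box, single core; numbers will obviously vary by CPU:

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"
        "time"
    )

    func main() {
        const chunk = 1 << 20 // 1 MiB per op
        const iters = 1024    // ~1 GiB total

        src := make([]byte, chunk)
        dst := make([]byte, chunk)
        rand.Read(src)

        // Baseline: plain memory copy.
        start := time.Now()
        for i := 0; i < iters; i++ {
            copy(dst, src)
        }
        copyDur := time.Since(start)

        // AES-128-GCM; Go uses AES-NI + CLMUL on amd64, so this should land
        // within a small factor of the copy above on most server CPUs.
        key := make([]byte, 16)
        nonce := make([]byte, 12) // fixed nonce is fine for a throughput test, never for real traffic
        rand.Read(key)
        block, _ := aes.NewCipher(key)
        gcm, _ := cipher.NewGCM(block)
        out := make([]byte, 0, chunk+gcm.Overhead())

        start = time.Now()
        for i := 0; i < iters; i++ {
            out = gcm.Seal(out[:0], nonce, src, nil)
        }
        sealDur := time.Since(start)

        gib := float64(iters*chunk) / (1 << 30)
        fmt.Printf("memcpy:  %.1f GiB/s\n", gib/copyDur.Seconds())
        fmt.Printf("AES-GCM: %.1f GiB/s\n", gib/sealDur.Seconds())
    }
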

replies(1): >>42142051 #
13. liveoneggs No.42142051{4}
lol
14. cyberpunk No.42144696
For us it’s compliance-related first, rather than any real security upgrade; we must use mTLS between all services (finance) and it’s simply less to manage to use a service mesh.

The cloud provider could read the memory of a k8s node and in theory capture the session keys of two workloads on the same node, and we can’t really protect against that without something like confidential computing.

We get some other benefits for free by using istio though, like nice logs, easily sending a chunk of traffic to a new release before rolling it out everywhere, or doing fully transparent OAuth even in services which don’t support it (oauth2proxy, and istio looks at the JWTs etc).