I.e. even if the communication is entirely between components inside a k8s (or borg) cluster, it should be authenticated and encrypted.
In this model, there may be a reverse proxy at the edge of the cluster, but the communication between that proxy and the internal services would still be https. With systems like cert-manager it's also incredibly easy to supply every in-cluster process with a certificate from the cluster-internal CA.
-- Googler, not related to this project
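For illustration, that cert-manager pattern is roughly two resources: a ClusterIssuer backed by the internal CA's keypair, and a Certificate per workload that materializes a ready-to-mount TLS secret. A minimal sketch (all names here are hypothetical):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cluster-internal-ca
spec:
  ca:
    secretName: internal-ca-keypair   # Secret holding the internal CA cert + key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: payments-tls
  namespace: payments
spec:
  secretName: payments-tls            # issued cert + key land in this Secret
  dnsNames:
    - payments.payments.svc.cluster.local
  issuerRef:
    name: cluster-internal-ca
    kind: ClusterIssuer
```

cert-manager renews the certificate automatically, so every in-cluster process can just mount the Secret and speak tls.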
Every component needs a different tls configuration, versus installing istio once.
Raw TCP is supported by istio even with mtls; you just have to match on SNI in your VirtualServices instead of the Host header (see the sketch below).
We routinely mix tcp and http services on the same external ports, with mtls for both.
I don't really see how UDP is relevant to a conversation about tls.
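A sketch of the SNI matching mentioned above; this assumes a gateway that passes the TLS through rather than terminating it, and all hosts, ports, and names here are made up:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: redis-sni
spec:
  hosts:
    - redis.example.com
  gateways:
    - shared-gateway
  tls:
    - match:
        - port: 6379
          sniHosts:               # route on the SNI in the ClientHello
            - redis.example.com
      route:
        - destination:
            host: redis.backend.svc.cluster.local
            port:
              number: 6379
```

Because the `tls` routes match on SNI rather than anything in the payload, raw-TCP services can share an external port with http ones.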
My understanding of that model is that the services themselves still just do unauthenticated HTTP: this gets picked up on the client side by a sidecar, wrapped in mTLS/HTTPS, authenticated and unwrapped by the server-side sidecar, then passed as plain HTTP to the server itself.
This is great when we have intra-host vulnerabilities, I guess. But it doesn't allow you to e.g. have code sanitizers that are strict about using TLS properly (google does this).
And while it is a real gain over simple unauthenticated traffic on an untrusted network between nodes, with cilium taking care of encrypting the traffic between nodes I don't quite see how the added layer is more useful than strictly applied network policies.
(Besides some edge cases where it's used to apply further internal authorization based on workload identity. Though that trusts the "network" (istio) again.)
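For reference, the enforcement knob in that model is typically a single mesh-wide PeerAuthentication resource. Note that it governs the sidecar-to-sidecar hop only, not the plaintext hop between an app and its own sidecar, which is exactly the gap described above:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace, so this applies mesh-wide
spec:
  mtls:
    mode: STRICT             # sidecars reject plaintext traffic from peers
```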
I don't really understand your point: you're trying to say managing a single helm release for istio is more effort than (in my case, for example) manually managing around 40 TLS certificates (and yes, we have an in-house PKI with our own CA that issues via ACME/certbot etc. too) and the services that use them? It's clearly not.
Just templating out the config files for e.g. Cassandra, ES, Redis, or whatever other component takes several times the effort of ./helm install istio.
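For a concrete sense of the per-component effort, here's the kind of stanza you end up templating just for Cassandra's internode and client TLS (cassandra.yaml; the paths and template variables are placeholders):

```yaml
server_encryption_options:
  internode_encryption: all                        # encrypt node-to-node traffic
  keystore: /etc/cassandra/certs/keystore.jks
  keystore_password: "{{ cassandra_keystore_pw }}"
  truststore: /etc/cassandra/certs/truststore.jks
  truststore_password: "{{ cassandra_truststore_pw }}"
  require_client_auth: true                        # mutual TLS between nodes
client_encryption_options:
  enabled: true
  keystore: /etc/cassandra/certs/keystore.jks
  keystore_password: "{{ cassandra_keystore_pw }}"
```

And each component has its own, slightly different, set of knobs.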
AES-NI gives you encryption at basically the speed of memcpy. Userland network stacks are faster because they don't incur the kernel-call cost. With that, if your NIC drivers support zero-copy access, an extra hop to a machine in the same rack over a 10G link is barely noticeable, maybe even shorter than an L3 miss.
The cost of this is mostly more hardware and more power, but little or no additional latency.
The cloud provider could read the memory of a k8s node and in theory capture the session keys of two workloads on the same node, and we can’t really protect against that without something like confidential computing.
We get some other benefits for free by using istio though, like nice logs, easily sending a chunk of traffic to a new release before rolling it out everywhere, or doing fully transparent oauth even in services which don't support it (oauth2proxy plus istio looking at the jwts, etc.).
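The traffic-splitting part, for instance, is just weights on a VirtualService. A sketch, assuming a DestinationRule elsewhere defines the v1/v2 subsets (names hypothetical):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp.prod.svc.cluster.local
  http:
    - route:
        - destination:
            host: myapp.prod.svc.cluster.local
            subset: v2     # new release
          weight: 10       # send 10% of traffic to it first
        - destination:
            host: myapp.prod.svc.cluster.local
            subset: v1     # current release
          weight: 90
```

Bump the weights as confidence grows, with no change to the services themselves.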