I.e. even if the communication is entirely between components inside a k8s (or borg) cluster, it should be authenticated and encrypted.
In this model, there may be a reverse proxy at the edge of the cluster, but the communication between that proxy and the internal services would still be HTTPS. With systems like cert-manager it's also incredibly easy to supply every in-cluster process with a certificate from the cluster-internal CA.
-- Googler, not related to this project
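For anyone who hasn't used cert-manager, this is roughly what that looks like (a sketch only; the service name, namespace, and issuer name are made up): one Certificate resource per service, issued by a ClusterIssuer wrapping the cluster-internal CA, with the key pair written to a Secret the pod mounts and serves directly.

```yaml
# Hypothetical sketch: ask cert-manager to issue a cert for an in-cluster
# service from a cluster-internal CA, delivered as a Secret the pod can mount.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: payments-tls
  namespace: payments
spec:
  secretName: payments-tls          # cert + key land in this Secret
  dnsNames:
    - payments.payments.svc.cluster.local
  issuerRef:
    name: cluster-internal-ca       # a ClusterIssuer backed by the internal CA
    kind: ClusterIssuer
```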
My understanding of that model is that the services themselves still just speak unauthenticated HTTP; this gets picked up on the client side by a sidecar, wrapped into mTLS/HTTPS, authenticated and unwrapped by the server-side sidecar, then passed as plain HTTP to the server itself.
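Concretely, in Istio that model is switched on with something like the following (a sketch, assuming sidecar injection is enabled; applying it in the root namespace makes it mesh-wide), while the application keeps listening on plain HTTP behind the sidecar:

```yaml
# Hypothetical sketch: tell the mesh to require mTLS between sidecars.
# The application itself never sees the TLS handshake.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace => applies mesh-wide
spec:
  mtls:
    mode: STRICT             # sidecars reject any non-mTLS traffic
```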
This is great when we have intra-host vulnerabilities, I guess. But it doesn't allow e.g. code sanitizers that are strict about using TLS properly in the application itself (Google does this).
And while it is a real gain over plain unauthenticated traffic across an untrusted network between nodes, with Cilium taking care of securing the inter-node networking I don't quite see how the added layer is more useful than applying network policies strictly (sketched below).
(Besides some edge cases where it's used to apply further internal authorization based on the workload identity, though that means trusting the "network" (Istio) again.)
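To be concrete about the alternative: by "network policies strictly" I mean roughly a default-deny posture plus an explicit allow rule per expected caller, something like this (all names and labels are hypothetical):

```yaml
# Hypothetical sketch of "strict network policies": default-deny ingress in the
# namespace, then an explicit allow for each expected caller.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}           # selects every pod in the namespace
  policyTypes: ["Ingress"]  # no ingress rules => all inbound traffic denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-checkout-to-payments
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: checkout
          podSelector:
            matchLabels:
              app: checkout
      ports:
        - protocol: TCP
          port: 8080
```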
The cloud provider could read the memory of a k8s node and in theory capture the session keys of two workloads on the same node, and we can’t really protect against that without something like confidential computing.
We do get some other benefits for free by using Istio though, like nice access logs, easily sending a chunk of traffic to a new release before rolling it out everywhere, or doing fully transparent OAuth even in services that don't support it (oauth2proxy plus Istio looking at the JWTs, etc.).
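The traffic-shifting part, for example, is just a weighted route (again only a sketch; it assumes a DestinationRule defining the stable/canary subsets, and all names are made up):

```yaml
# Hypothetical sketch: send 5% of requests to the new release, the rest to the
# stable one, with no application changes.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: payments
  namespace: payments
spec:
  hosts:
    - payments
  http:
    - route:
        - destination:
            host: payments
            subset: stable
          weight: 95
        - destination:
            host: payments
            subset: canary
          weight: 5
```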