I wonder if it is possible with IPv6 to never (or to roll through the addresses so reuse is temporally distant) reuse addresses, which removes the problems with staleness and false attribution.
"I wonder if it is possible with IPv6 to never... reuse addresses, which removes the problems with staleness and false attribution."
Most VPCs (AWS included) don't currently support "true" IPv6 scale-out behavior. But! If IPs were truly immutable and unique per workload, attribution becomes trivial. It's just not realistic yet... maybe something to explore with the lads?
> Most VPCs (also AWS) don’t currently support "true" IPv6 scaleout behavior.
That's a shame.
> if IPs were truly immutable and unique per workload, attribution becomes trivial
I would like to see that. IPAM for multi-tenant workloads has always felt like a kludge. The network needs to understand how to route to a workload, but on IPv4 the network has many more workloads than addresses. If you assign immutable addresses per workload (or, say, it takes you a month to chew through your IPv6 address space), the network natively knows how to route to workloads without kludgy IP reassignment.
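A bit of back-of-the-envelope arithmetic shows why "a month to chew through the address space" is plausible. The pool size and churn rate below are hypothetical, just to illustrate how reuse distance scales with pool size:

```python
# Hypothetical numbers: how long before a sequentially-allocated IPv6 pool
# wraps, i.e. how temporally distant any address reuse would be.
pool_bits = 32            # e.g. a /96 pool gives 2**32 addresses
churn_per_second = 1000   # assumed workload creation rate across the DC
seconds_to_wrap = 2**pool_bits / churn_per_second
days_to_wrap = seconds_to_wrap / 86400
print(f"{days_to_wrap:.1f} days between reuse of any given address")
```

Even a comparatively small /96 carve-out at a thousand new workloads per second puts reuse of any single address roughly seven weeks apart; a bigger pool pushes it out to years.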
I have had to deal with IP address pools being exhausted due to high pod churn in EC2 a number of times and it is always a pain.
Immutable addressing per workload with IPv6 feels like such a clean mental model, especially for attribution, routing, and observability.
Curious if you have seen anyone pull that off cleanly in production, truly immutable addressing at scale? Has it been battle-tested somewhere, or is it still mostly an ideal?
Hypothetically it is not hard: you split your IPv6 prefix per datacenter, then use etcd to coordinate access to the pool and hand out immutable addresses. You start from the lowest address and walk toward the highest; when you reach the highest, you wrap back to the lowest. As long as your churn is not too high and your pool is big enough, addresses are only reused far enough apart in time that reuse doesn't cause any problems with false attribution.
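The wrap-around allocation described above can be sketched in a few lines with Python's standard `ipaddress` module. This is a single-process sketch; in the actual design the cursor would live in etcd so every allocator in the datacenter shares it. The tiny `/126` prefix is just to make the wrap visible:

```python
import ipaddress

class WrappingAllocator:
    """Hands out addresses from a per-datacenter IPv6 prefix, lowest to
    highest, wrapping back to the lowest when the pool is exhausted.
    Sketch only: the real cursor would be coordinated through etcd."""

    def __init__(self, prefix: str):
        self.pool = ipaddress.ip_network(prefix)
        self.cursor = 0

    def allocate(self) -> ipaddress.IPv6Address:
        addr = self.pool[self.cursor]
        # Wrap back to the lowest address once the pool is exhausted.
        self.cursor = (self.cursor + 1) % self.pool.num_addresses
        return addr

alloc = WrappingAllocator("2001:db8::/126")  # 4-address pool to show the wrap
addrs = [str(alloc.allocate()) for _ in range(5)]
print(addrs)  # the fifth allocation wraps back to the first address
```

With a realistically sized prefix the wrap happens so rarely that any reuse is temporally distant, which is the whole point.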
In the etcd store you can just keep KV pairs of ipv6 -> workload ID. If you really want to be fancy, you can watch those keys with an etcd client and get live updates as new addresses are assigned to workloads, then plug those updates into whatever system of choice needs to map IPv6 to workload, such as network flow tools.
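The mapping-plus-watch pattern looks roughly like this. A plain in-process dict stands in for etcd here so the sketch is self-contained; a real deployment would use an etcd client's prefix watch instead, and the workload ID and callback are made-up examples:

```python
# Stand-in for the etcd-backed registry: KV pairs of ipv6 -> workload ID,
# plus watch callbacks that fire on each new assignment (the dict replaces
# etcd purely so this runs without a server).

class AddressRegistry:
    def __init__(self):
        self.kv = {}          # ipv6 address -> workload ID
        self.watchers = []    # callbacks, e.g. a flow-log enricher

    def watch(self, callback):
        """Register a callback, analogous to an etcd watch on the key prefix."""
        self.watchers.append(callback)

    def assign(self, ipv6: str, workload_id: str):
        self.kv[ipv6] = workload_id
        for cb in self.watchers:  # live update on every new assignment
            cb(ipv6, workload_id)

events = []
reg = AddressRegistry()
reg.watch(lambda ip, wl: events.append((ip, wl)))  # e.g. feed a netflow tool
reg.assign("2001:db8::1", "pod-frontend-abc123")   # hypothetical workload ID
```

Anything consuming the watch stream (flow collectors, audit logs) then resolves an IPv6 address to a workload without ever needing to handle reassignment races.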
Unless you are doing something insane, a DC-local etcd quorum should easily keep up with immutable address requests.