
205 points by anurag | 6 comments
1. nitinreddy88 No.45767099
The other way to look at it: why does adding the namespace label cause such a large memory footprint in Kubernetes in the first place? Shouldn't we fix that instead (it could be a much bigger design change)? That would benefit the whole Kube community.
replies(1): >>45773639 #
2. bstack No.45773639
Author here: yeah, that's a good point. tbh I was mostly unfamiliar with Vector, so I took the shortest path to the goal, but that could be an interesting followup. It does seem like there's a lot of bytes per namespace!
replies(1): >>45779035 #
3. stackskipton No.45779035
You mentioned in the blog article that it's doing a list-watch. A list-watch registers with the Kubernetes API to get a list of all objects AND a notification whenever any object of the registered kind changes. A bunch of Vector pods saying "Hey, send me a notification when anything with namespaces changes" and poof goes your memory, keeping track of who needs to know what.
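To make the "poof goes your memory" concrete, here's a rough back-of-envelope sketch. All the numbers are made-up assumptions for illustration (not measurements from the article); the point is just that every per-node watcher caches its own full copy of the namespace list, so the cost scales as nodes × namespaces:

```python
# Back-of-envelope: every Vector pod (one per node, as a DaemonSet) runs its
# own list-watch and therefore caches its own copy of every Namespace object.
# All constants below are illustrative assumptions.

BYTES_PER_CACHED_NAMESPACE = 4_000   # assumed size of one cached Namespace object
NUM_NODES = 500                      # one Vector pod per node (DaemonSet)
NUM_NAMESPACES = 50_000              # many small customers -> many namespaces

def cluster_wide_cache_bytes(nodes: int, namespaces: int, per_obj: int) -> int:
    """Each watcher keeps a full copy of the namespace list, so the
    cluster-wide cost is nodes * namespaces * per_obj."""
    return nodes * namespaces * per_obj

total = cluster_wide_cache_bytes(NUM_NODES, NUM_NAMESPACES, BYTES_PER_CACHED_NAMESPACE)
print(f"~{total / 1e9:.0f} GB of duplicated cache across the cluster")  # ~100 GB
```

With those (invented) numbers, that's ~100 GB of cache duplicated across the fleet, even though the API server only holds one copy.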

At this point, I wonder if instead of relying on DaemonSets, you just gave every namespace a Vector instance responsible for that namespace and the pods within it. ElasticSearch or whatever you pipe logging data to might not be happy with all those TCP connections, though.

Just my SRE brain thoughts.

replies(1): >>45779195 #
4. fells No.45779195
>you just gave every namespace a vector instance that was responsible for that namespace and pods within.

Vector is a DaemonSet because it needs to tail the log files on each node. A single Vector per namespace might not reside on the nodes that each of that namespace's pods is on.
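For context on why the files are node-local: the kubelet writes container logs under `/var/log/pods/<namespace>_<pod>_<uid>/<container>/`, so those files only exist on the node where the pod was scheduled. A quick simulation of that layout (the directory naming is standard kubelet behavior; everything else here is a made-up sandbox):

```python
import tempfile
from pathlib import Path

# Simulate the kubelet's on-node log layout:
#   /var/log/pods/<namespace>_<pod>_<uid>/<container>/<n>.log
# A log shipper must run on every node (DaemonSet) because these files
# only exist on the node where each pod is scheduled.

root = Path(tempfile.mkdtemp())
for pod_dir in ["team-a_web-1_1111", "team-b_api-2_2222"]:
    container_dir = root / pod_dir / "app"
    container_dir.mkdir(parents=True)
    (container_dir / "0.log").write_text("hello\n")

def logs_for_namespace(base: Path, namespace: str):
    # The directory name encodes the namespace, so a node-local agent
    # can filter by namespace without asking the API server.
    return sorted(p for p in base.glob(f"{namespace}_*/*/*.log"))

print([str(p.relative_to(root)) for p in logs_for_namespace(root, "team-a")])
# -> ['team-a_web-1_1111/app/0.log']
```

This is also why the directory-name trick in the article works: the namespace is recoverable from the path alone, without any API watch.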

replies(2): >>45782091 #>>45782741 #
5. stackskipton No.45782091
I think the DaemonSet is there to reduce network load, so Vector isn't pulling log files over the network.

We run Vector as a DaemonSet as well, but we don't have a ton of namespaces. Render sounds like they have a ton of namespaces, each running maybe one or two pods, since their customers are much smaller. This is probably a much more niche setup than most Kubernetes users have.

6. ahoka No.45782741
That's where the design is wrong.