Reputation: 1469
My setup
I have a single-node K8s cluster where I taint the master node so it can also act as a worker. The node runs CentOS 7 and has 512 GB of memory in total. I am limiting my experiments to a one-node cluster; once a solution is found, I will test it on my small-scale K8s cluster where the master and worker services run on separate nodes.
What I am trying to do
I want to monitor pod-level resource utilization (CPU and memory). I am launching a pod which consumes memory at a rate of 1 GB/s; in around 100 seconds, memory utilization reaches 100 GB, at which point the application reaches a steady state. From that point, it keeps running until killed with a trigger.
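For completeness, a minimal sketch of the kind of memory-hungry pod I mean; this is just an illustrative workload built on the polinux/stress image (name and sizes are placeholders, not my actual application):

apiVersion: v1
kind: Pod
metadata:
  name: memory-demo             # placeholder name
spec:
  containers:
  - name: memory-demo
    image: polinux/stress       # illustrative stress image, not my real app
    command: ["stress"]
    # allocate ~100 GB in one worker and hold it (steady state)
    args: ["--vm", "1", "--vm-bytes", "100G", "--vm-hang", "0"]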
Where I am right now with this
After launching the K8s Metrics Server, I am able to run kubectl top pods
and it shows per-pod CPU and memory utilization. However, these utilization numbers are not updated frequently. I tried to measure how long K8s takes to update this telemetry, and the sampling interval appears to be close to 1 minute (60 seconds).
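For example, I measured it roughly like this (the pod name is a placeholder for the memory-demo pod above):

# Sample kubectl top every 5 seconds with a timestamp; the reported
# values only change about once per minute.
while true; do
  echo "$(date +%T) $(kubectl top pod memory-demo --no-headers)"
  sleep 5
done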
I tried looking into https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ to figure out the various sampling intervals. There are a few parameters which could impact the telemetry sampling rate, but they are set to ~20s (defaults) at most. I am not changing any Kubelet settings.
My Question
Why does it take around a minute for kubectl top pods
to update resource utilization numbers? How can I reduce this interval and get more frequent updates?
Upvotes: 1
Views: 1306
Reputation: 6853
Why does it take around a minute for kubectl top pods
to update resource utilization numbers?
It's because of the Metrics Server's default resolution, which is set to 60s.
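You can confirm which flags the Metrics Server is currently running with; the deployment name and namespace below assume a standard install:

# Print the container command/flags of the Metrics Server deployment
kubectl -n kube-system get deployment metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].command}'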
How can I reduce this interval and get more frequent updates?
You can change the resolution with the --metric-resolution=<duration>
flag.
It is not recommended, however, to set values below 15s, as this is the resolution of metrics calculated by the Kubelet.
spec:
  containers:
  - command:
    - /metrics-server
    - --metric-resolution=15s
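One way to apply this is to edit the Metrics Server deployment directly (again assuming the standard name and namespace); saving the change triggers a rollout:

# Add --metric-resolution=15s to the container's command and save
kubectl edit deployment metrics-server -n kube-system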
Reference: How often metrics are scraped
Upvotes: 2