Reputation: 191
I've observed some "weird" behavior in a GKE cluster: it seems that resource requests/limits set on a pod/deployment are not respected (or are wrongly interpreted) by the nodes.
Any idea what the reason for this behaviour may be and how to solve it? (It causes a lot of issues with resource allocation in the cluster.)
Example of a pod which runs with a 50m CPU request, which is seen as 250m by the node:
$ kubectl get pod core-worker-6bcf9d4877-5wqpb -n austria -o=jsonpath='{range .spec.containers[*]}{"Container Name: "}{.name}{"\n Requests:\n CPU: "}{.resources.requests.cpu}{"\n Memory: "}{.resources.requests.memory}{"\n Limits:\n CPU: "}{.resources.limits.cpu}{"\n Memory: "}{.resources.limits.memory}{"\n"}{end}'
Container Name: core-worker
 Requests:
 CPU: 50m
 Memory: 256Mi
 Limits:
 CPU:
 Memory: 1Gi
And now from the node's perspective:
$ kubectl describe node $(kubectl get pod core-worker-6bcf9d4877-5wqpb -n austria -o=custom-columns=NODE:.spec.nodeName --no-headers)
...
Non-terminated Pods:  (12 in total)
  Namespace  Name                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------  ----                          ------------  ----------  ---------------  -------------  ---
  austria    core-worker-6bcf9d4877-5wqpb  250m (26%)    500m (53%)  256Mi (5%)       1Gi (21%)      18m
Upvotes: 0
Views: 70
Reputation: 191
Turns out that describe node showed the resource limits/requests from the initContainer, as pointed out by @RoarS. Kubernetes schedules against the effective pod request, which is the larger of the sum of all app containers' requests and the highest init container request (init containers run one at a time, before the app containers start), so a single init container requesting 250m CPU dominates the app container's 50m request.
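A quick way to confirm this is to point the same jsonpath query from the question at .spec.initContainers instead of .spec.containers (a sketch reusing the pod from the question; adjust the pod name and namespace as needed):
$ kubectl get pod core-worker-6bcf9d4877-5wqpb -n austria -o=jsonpath='{range .spec.initContainers[*]}{"Init Container Name: "}{.name}{"\n Requests:\n CPU: "}{.resources.requests.cpu}{"\n Memory: "}{.resources.requests.memory}{"\n Limits:\n CPU: "}{.resources.limits.cpu}{"\n Memory: "}{.resources.limits.memory}{"\n"}{end}'
If an init container reports a 250m CPU request and a 500m CPU limit, that accounts exactly for the numbers in the describe node output above.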
Upvotes: 0