ankit patel

Reputation: 1469

Changing the memory allocation of a Kubernetes worker node

My setup

I have a single physical-node K8s cluster where I taint the master node so it can also act as a worker. The node runs CentOS 7 with a total of 512 GB of memory. I am limiting my experiments to this one-node cluster; once a solution is found, I will test it on my small-scale K8s cluster where the master and worker services run on separate nodes.

What I am trying to do

I want the k8s worker to use only 256 GB of memory to begin with. Later on, if a particular node condition is met, I want to increase the memory allocation for the k8s worker to (let's say) 400 GB.

Where I am right now with this

  1. I reboot my machine and the node comes up with the full 512 GB of memory.
  2. I use chmem -d <range> to offline 256 GB of memory. Now the OS sees only 256 GB of available memory (a sketch of the chmem commands appears after this list).
  3. I follow the steps from kubeadm init through kubectl taint nodes --all node-role.kubernetes.io/master- so the single node can also schedule pods.
  4. My single-node K8s cluster is up and I can deploy up to 2 pods. Each pod requests 100Gi and is limited to 200Gi of memory. The pods just execute sleep 100000, so there is no memory stress (a sample manifest appears after this list).
  5. When I try to launch a third pod, it gets stuck in the Pending state because the scheduler detects that the only worker node it manages is out of allocatable memory. This makes sense; the third pod simply stays Pending forever.
  6. After some time, the node meets the required condition; at this point I use chmem -e <range> to re-enable some memory, and now the OS sees 400 GB.
  7. At this point I want to make the k8s worker aware of this change in memory resource capacity so that the third pod stuck in Pending can be deployed.
  8. This is where I need your help: how can I update the memory resource capacity of a worker without restarting my k8s cluster? If I restart the cluster, it sees the 400 GB of memory, but that means I need to kill the already running pods, and killing running pods is not acceptable.
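For reference, here is a minimal sketch of the memory offlining/onlining in steps 2 and 6. The sizes match the numbers above, but the exact invocation depends on your hardware's memory block layout, which lsmem reports; chmem also accepts an explicit address range, as in the question's chmem -d <range>:

    # List memory blocks and their online/offline state
    lsmem

    # Step 2: take 256 GB offline, leaving 256 GB online
    chmem -d 256g

    # Step 6: later, bring 144 GB back online so the OS sees ~400 GB
    chmem -e 144g

    # Verify what the OS now sees
    free -g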
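And a sketch of the kind of pod used in step 4; the pod name and image are assumptions for illustration, not from the question:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: sleeper-1              # hypothetical name
    spec:
      containers:
      - name: sleeper
        image: busybox             # assumed image; any image with sleep works
        command: ["sleep", "100000"]
        resources:
          requests:
            memory: 100Gi          # the scheduler reserves this much on the node
          limits:
            memory: 200Gi          # the pod is killed if it exceeds this
    EOF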

Upvotes: 0

Views: 7372

Answers (2)

Bernard Halas

Reputation: 1190

It's a long shot, but you can try restarting kubelet via systemctl restart kubelet. The containers should not be restarted this way, and there's hope that, once restarted, kubelet will notice the increased memory configuration.
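A quick way to test this, as a sketch (substitute your actual node name):

    # Restart kubelet only; the container runtime and the running
    # containers are left untouched
    systemctl restart kubelet

    # Check whether the node's Capacity/Allocatable figures were updated
    kubectl describe node <node-name> | grep -A 6 -i capacity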

Upvotes: 2

Sagar Velankar

Reputation: 855

  • Please provide the output of the command below:
kubectl describe nodes
  • Restarting kubelet with the command below should work, because kubelet recalculates the allocatable CPU and memory every time it starts:
systemctl restart kubelet
  • Did you try removing and recreating the third pod after restarting kubelet? (A sketch follows this list.)
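A sketch of that recreate step; the pod name and manifest path are assumptions:

    # Delete the pod that is stuck in Pending, then resubmit it so the
    # scheduler re-evaluates it against the updated node capacity
    kubectl delete pod third-pod              # assumed pod name
    kubectl apply -f third-pod.yaml           # assumed manifest file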

Upvotes: 1
