Vincent GICQUEL

Increase max pods on Node

docker info: Server Version: 18.06.3-ce

[root@lgn-test-cp65 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.9", GitCommit:"16236ce91790d4c75b79f6ce96841db1c843e7d2", GitTreeState:"clean", BuildDate:"2019-03-25T06:40:24Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.9", GitCommit:"16236ce91790d4c75b79f6ce96841db1c843e7d2", GitTreeState:"clean", BuildDate:"2019-03-25T06:30:48Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

I'm trying to increase maxPods on a node, but it's not working. I would like to raise maxPods from 110 to 200.

My command:

kubelet --max-pods=200 --kubeconfig=/root/.kube/config

And the result:

[root@lgn-test-cp65 ~]# kubelet --max-pods=200 --kubeconfig=/root/.kube/config
Flag --max-pods has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I0218 11:16:09.655793   22675 server.go:408] Version: v1.11.9
I0218 11:16:09.656132   22675 plugins.go:97] No cloud provider specified.
W0218 11:16:19.708398   22675 manager.go:251] Timeout trying to communicate with docker during initialization, will retry
I0218 11:16:20.723226   22675 server.go:648] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0218 11:16:20.727579   22675 container_manager_linux.go:243] container manager verified user specified cgroup-root exists: []
I0218 11:16:20.727635   22675 container_manager_linux.go:248] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
I0218 11:16:20.727862   22675 container_manager_linux.go:267] Creating device plugin manager: true
I0218 11:16:20.727938   22675 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0218 11:16:20.728028   22675 state_mem.go:84] [cpumanager] updated default cpuset: ""
I0218 11:16:20.728043   22675 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
I0218 11:16:20.728221   22675 kubelet.go:299] Watching apiserver
I0218 11:16:20.749497   22675 client.go:75] Connecting to docker on unix:///var/run/docker.sock
I0218 11:16:20.749540   22675 client.go:104] Start docker client with request timeout=2m0s
W0218 11:16:20.752947   22675 docker_service.go:545] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0218 11:16:20.752989   22675 docker_service.go:238] Hairpin mode set to "hairpin-veth"
I0218 11:16:20.780638   22675 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
I0218 11:16:20.939375   22675 docker_service.go:258] Docker Info: &{ID:RS55:WJBO:XXTM:PPAJ:HLUV:UBH7:ZWFK:WERN:ZIHT:PAJ6:2RWI:7WOA Containers:429 ContainersRunning:302 ContainersPaused:0 ContainersStopped:127 Images:59 Driver:devicemapper DriverStatus:[[Pool Name docker-253:0-67351869-pool] [Pool Blocksize 65.54kB] [Base Device Size 10.74GB] [Backing Filesystem xfs] [Udev Sync Supported true] [Data file /dev/loop0] [Metadata file /dev/loop1] [Data loop file /var/lib/docker/devicemapper/devicemapper/data] [Metadata loop file /var/lib/docker/devicemapper/devicemapper/metadata] [Data Space Used 23.34GB] [Data Space Total 107.4GB] [Data Space Available 84.03GB] [Metadata Space Used 62.43MB] [Metadata Space Total 2.147GB] [Metadata Space Available 2.085GB] [Thin Pool Minimum Free Space 10.74GB] [Deferred Removal Enabled true] [Deferred Deletion Enabled true] [Deferred Deleted Device Count 0] [Library Version 1.02.158-RHEL7 (2019-05-13)]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:1358 OomKillDisable:true NGoroutines:1195 SystemTime:2020-02-18T11:16:20.857237756+01:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:1 KernelVersion:3.10.0-1062.9.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc42042c070 NCPU:16 MemTotal:67385786368 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:lgn-test-cp65.intranet.novaliance.com Labels:[] ExperimentalBuild:false ServerVersion:18.06.3-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:468a545b9edcd5932818eb9de8e72413e616e86e Expected:468a545b9edcd5932818eb9de8e72413e616e86e} RuncCommit:{ID:a592beb5bc4c4092b1b1bac971afed27687340c5 Expected:a592beb5bc4c4092b1b1bac971afed27687340c5} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default]}
I0218 11:16:20.939536   22675 docker_service.go:271] Setting cgroupDriver to cgroupfs
I0218 11:16:21.050308   22675 kuberuntime_manager.go:186] Container runtime docker initialized, version: 18.06.3-ce, apiVersion: 1.38.0
I0218 11:16:21.050912   22675 csi_plugin.go:138] kubernetes.io/csi: plugin initializing...
E0218 11:16:21.056297   22675 kubelet.go:1252] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
I0218 11:16:21.057696   22675 server.go:129] Starting to listen on 0.0.0.0:10250
I0218 11:16:21.058671   22675 server.go:302] Adding debug handlers to kubelet server.
F0218 11:16:21.059593   22675 server.go:141] listen tcp 0.0.0.0:10250: bind: address already in use

Thanks for your help. I have already created the config file, but it's still not working:

[root@lgn-test-cp65 ~]# kubelet --config=maxpod.yaml --kubeconfig=/root/.kube/config
I0218 13:50:09.199888   26656 server.go:408] Version: v1.11.9
I0218 13:50:09.200319   26656 plugins.go:97] No cloud provider specified.
I0218 13:50:09.374179   26656 server.go:648] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0218 13:50:09.393932   26656 container_manager_linux.go:243] container manager verified user specified cgroup-root exists: []
I0218 13:50:09.393977   26656 container_manager_linux.go:248] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
I0218 13:50:09.394140   26656 container_manager_linux.go:267] Creating device plugin manager: true
I0218 13:50:09.394223   26656 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0218 13:50:09.394326   26656 state_mem.go:84] [cpumanager] updated default cpuset: ""
I0218 13:50:09.394343   26656 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
I0218 13:50:09.394518   26656 kubelet.go:299] Watching apiserver
I0218 13:50:09.399816   26656 client.go:75] Connecting to docker on unix:///var/run/docker.sock
I0218 13:50:09.399857   26656 client.go:104] Start docker client with request timeout=2m0s
W0218 13:50:09.402418   26656 docker_service.go:545] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0218 13:50:09.402466   26656 docker_service.go:238] Hairpin mode set to "hairpin-veth"
I0218 13:50:09.408834   26656 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
I0218 13:50:09.464778   26656 docker_service.go:258] Docker Info: &{ID:RS55:WJBO:XXTM:PPAJ:HLUV:UBH7:ZWFK:WERN:ZIHT:PAJ6:2RWI:7WOA Containers:424 ContainersRunning:333 ContainersPaused:0 ContainersStopped:91 Images:59 Driver:devicemapper DriverStatus:[[Pool Name docker-253:0-67351869-pool] [Pool Blocksize 65.54kB] [Base Device Size 10.74GB] [Backing Filesystem xfs] [Udev Sync Supported true] [Data file /dev/loop0] [Metadata file /dev/loop1] [Data loop file /var/lib/docker/devicemapper/devicemapper/data] [Metadata loop file /var/lib/docker/devicemapper/devicemapper/metadata] [Data Space Used 23.05GB] [Data Space Total 107.4GB] [Data Space Available 84.32GB] [Metadata Space Used 62.6MB] [Metadata Space Total 2.147GB] [Metadata Space Available 2.085GB] [Thin Pool Minimum Free Space 10.74GB] [Deferred Removal Enabled true] [Deferred Deletion Enabled true] [Deferred Deleted Device Count 0] [Library Version 1.02.158-RHEL7 (2019-05-13)]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:1231 OomKillDisable:true NGoroutines:933 SystemTime:2020-02-18T13:50:09.448721588+01:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:1 KernelVersion:3.10.0-1062.9.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc4204089a0 NCPU:16 MemTotal:67385786368 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:lgn-test-cp65.intranet.novaliance.com Labels:[] ExperimentalBuild:false ServerVersion:18.06.3-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:468a545b9edcd5932818eb9de8e72413e616e86e Expected:468a545b9edcd5932818eb9de8e72413e616e86e} RuncCommit:{ID:a592beb5bc4c4092b1b1bac971afed27687340c5 Expected:a592beb5bc4c4092b1b1bac971afed27687340c5} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default]}
I0218 13:50:09.464926   26656 docker_service.go:271] Setting cgroupDriver to cgroupfs
I0218 13:50:09.506807   26656 kuberuntime_manager.go:186] Container runtime docker initialized, version: 18.06.3-ce, apiVersion: 1.38.0
I0218 13:50:09.507488   26656 csi_plugin.go:138] kubernetes.io/csi: plugin initializing...
I0218 13:50:09.508605   26656 server.go:129] Starting to listen on 0.0.0.0:10250
E0218 13:50:09.508656   26656 kubelet.go:1252] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
I0218 13:50:09.508868   26656 server.go:986] Started kubelet
E0218 13:50:09.509041   26656 server.go:733] Starting health server failed: listen tcp 127.0.0.1:10248: bind: address already in use
I0218 13:50:09.509923   26656 server.go:302] Adding debug handlers to kubelet server.
F0218 13:50:09.510787   26656 server.go:141] listen tcp 0.0.0.0:10250: bind: address already in use

maxpod.yaml

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 200


Answers (2)

Malgorzata

The --max-pods flag has been deprecated; this parameter should be set via the config file specified by the kubelet's --config flag.

You have to create a dedicated kubelet configuration file:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 200

Save the file and apply the changes:

$ kubelet --config=your-kubelet-conf.yaml --kubeconfig=/root/.kube/config

Remember that the configuration file must be a JSON or YAML representation of the parameters in the KubeletConfiguration struct. Also make sure the kubelet has read permissions on the file.
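
As a side note, the "bind: address already in use" lines in your output mean a kubelet is already running on this node, so a second instance started by hand will always fail to bind port 10250. A minimal sketch of applying the config to the kubelet that is already running instead, assuming a kubeadm-style node where the kubelet runs under systemd and reads /var/lib/kubelet/config.yaml (both the unit name and the path are assumptions, adjust them to your setup):

# Find out which config file the running kubelet actually loads
$ systemctl cat kubelet | grep -i config

# Set maxPods: 200 in that file, then restart the existing service
# instead of launching a second kubelet on the same ports
$ sudo vi /var/lib/kubelet/config.yaml
$ sudo systemctl restart kubelet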

You can also change a Node's configSource through several different mechanisms. For example, with kubectl patch:

$ kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMap\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"kubeletConfigKey\":\"kubelet\"}}}}"
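
The patch above points at a ConfigMap in the kube-system namespace whose "kubelet" key holds the configuration. A hedged example of creating such a ConfigMap from the file shown earlier (my-node-config is only a placeholder name):

$ kubectl -n kube-system create configmap my-node-config --from-file=kubelet=your-kubelet-conf.yaml --append-hash -o yaml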

Take a look here: configmap-kubelet, kubelet-configuration.


Mohammad Fahim Abrar

--max-pods has been deprecated. Please use the --config flag instead.

First, save the following content to a file:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 200

Then pass it to the kubelet via the --config flag:

kubelet --config=<file.yaml> --kubeconfig=/root/.kube/config
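
Once the kubelet is running with the new configuration, you can verify that the node advertises the higher limit (replace <node-name> with the name of your node; this is only a verification step):

kubectl get node <node-name> -o jsonpath='{.status.capacity.pods}'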
