Reputation: 1469
While initializing kubeadm, I am getting the following errors. I have also tried the command kubeadm reset before doing kubeadm init. The kubelet is also running, and the command I used to start it is systemctl enable kubelet && systemctl start kubelet. Following is the log after executing kubeadm init:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: Connection to "https://192.168.78.48:6443" uses proxy "http://user:[email protected]:3128/". If that is not intended, adjust your proxy settings
[preflight] WARNING: Running with swap on is not supported. Please disable swap or set kubelet's --fail-swap-on flag to false.
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [steller.india.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.140.48]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
Following is the output of journalctl -u kubelet:
-- Logs begin at Thu 2017-11-02 16:20:50 IST, end at Fri 2017-11-03 17:11:12 IST. --
Nov 03 16:36:48 steller.india.com systemd[1]: Started kubelet: The Kubernetes Node Agent.
Nov 03 16:36:48 steller.india.com systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Nov 03 16:36:48 steller.india.com kubelet[52511]: I1103 16:36:48.998467 52511 feature_gate.go:156] feature gates: map[]
Nov 03 16:36:48 steller.india.com kubelet[52511]: I1103 16:36:48.998532 52511 controller.go:114] kubelet config controller: starting controller
Nov 03 16:36:48 steller.india.com kubelet[52511]: I1103 16:36:48.998536 52511 controller.go:118] kubelet config controller: validating combination of defaults and flag
Nov 03 16:36:49 steller.india.com kubelet[52511]: I1103 16:36:49.837248 52511 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Nov 03 16:36:49 steller.india.com kubelet[52511]: I1103 16:36:49.837282 52511 client.go:95] Start docker client with request timeout=2m0s
Nov 03 16:36:49 steller.india.com kubelet[52511]: W1103 16:36:49.839719 52511 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 03 16:36:49 steller.india.com systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Nov 03 16:36:49 steller.india.com kubelet[52511]: I1103 16:36:49.846959 52511 feature_gate.go:156] feature gates: map[]
Nov 03 16:36:49 steller.india.com kubelet[52511]: W1103 16:36:49.847216 52511 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider sho
Nov 03 16:36:49 steller.india.com kubelet[52511]: error: failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such
Nov 03 16:36:49 steller.india.com systemd[1]: Unit kubelet.service entered failed state.
Nov 03 16:36:49 steller.india.com systemd[1]: kubelet.service failed.
Nov 03 16:37:00 steller.india.com systemd[1]: kubelet.service holdoff time over, scheduling restart.
Nov 03 16:37:00 steller.india.com systemd[1]: Started kubelet: The Kubernetes Node Agent.
Nov 03 16:37:00 steller.india.com systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.134702 52975 feature_gate.go:156] feature gates: map[]
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.134763 52975 controller.go:114] kubelet config controller: starting controller
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.134767 52975 controller.go:118] kubelet config controller: validating combination of defaults and flag
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.141273 52975 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.141364 52975 client.go:95] Start docker client with request timeout=2m0s
Nov 03 16:37:00 steller.india.com kubelet[52975]: W1103 16:37:00.143023 52975 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.149537 52975 feature_gate.go:156] feature gates: map[]
Nov 03 16:37:00 steller.india.com kubelet[52975]: W1103 16:37:00.149780 52975 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider sho
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.179873 52975 certificate_manager.go:361] Requesting new certificate.
Nov 03 16:37:00 steller.india.com kubelet[52975]: E1103 16:37:00.180392 52975 certificate_manager.go:284] Failed while requesting a signed certificate from the master:
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.181404 52975 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/k
Nov 03 16:37:00 steller.india.com kubelet[52975]: W1103 16:37:00.223876 52975 manager.go:157] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api servic
Nov 03 16:37:00 steller.india.com kubelet[52975]: W1103 16:37:00.224005 52975 manager.go:166] unable to connect to CRI-O api service: Get http://%2Fvar%2Frun%2Fcrio.so
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.262573 52975 fs.go:139] Filesystem UUIDs: map[17856e0b-777f-4065-ac97-fb75d7a1e197:/dev/dm-1 2dc6a878-
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.262604 52975 fs.go:140] Filesystem partitions: map[/dev/sdb:{mountpoint:/D major:8 minor:16 fsType:xfs
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.268969 52975 manager.go:216] Machine: {NumCores:56 CpuFrequency:2600000 MemoryCapacity:540743667712 Hu
Nov 03 16:37:00 steller.india.com kubelet[52975]: 967295 Mtu:1500} {Name:eno49 MacAddress:14:02:ec:82:57:30 Speed:10000 Mtu:1500} {Name:eno50 MacAddress:14:02:ec:82:57:3
Nov 03 16:37:00 steller.india.com kubelet[52975]: evel:1} {Size:262144 Type:Unified Level:2}]} {Id:13 Threads:[12 40] Caches:[{Size:32768 Type:Data Level:1} {Size:32768
Nov 03 16:37:00 steller.india.com kubelet[52975]: s:[26 54] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.270145 52975 manager.go:222] Version: {KernelVersion:3.10.0-229.14.1.el7.x86_64 ContainerOsVersion:Cen
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.271263 52975 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaultin
Nov 03 16:37:00 steller.india.com kubelet[52975]: error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to
Nov 03 16:37:00 steller.india.com systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Nov 03 16:37:00 steller.india.com systemd[1]: Unit kubelet.service entered failed state.
Nov 03 16:37:00 steller.india.com systemd[1]: kubelet.service failed.
Upvotes: 13
Views: 24375
Reputation: 2388
On Ubuntu, disable swap and make the change persistent:
sudo swapoff -a && sudo sed -i '/swap/d' /etc/fstab
The second command disables swap permanently by deleting the swap row from the file /etc/fstab.
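If you would rather keep the entry around for reference instead of deleting it, here is a minimal sketch that comments it out instead (assuming GNU sed and a standard /etc/fstab layout where the swap line is the only uncommented line mentioning swap):
# Turn swap off for the current session
sudo swapoff -a
# Prepend # to any active (uncommented) fstab line that mentions swap,
# so it is kept for reference but no longer mounted at boot
sudo sed -i '/^[^#].*swap/ s/^/#/' /etc/fstab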
Upvotes: 2
Reputation: 22198
I'd guess different users will reach this question for different reasons, so I'll add some additional explanation here.
The fastest and safest solution is to disable swap, as mentioned in the link that @Jossef_Harush shared; this is also what K8s recommends.
But as he also mentioned, some workloads require more in-depth memory management.
If you encountered the mentioned error because swap was enabled on purpose, I would suggest reading about evicting end-user Pods before considering enabling swap:
If the kubelet is unable to reclaim sufficient resource on the node, kubelet begins evicting Pods.
The kubelet ranks Pods for eviction first by whether or not their usage of the starved resource exceeds requests, then by Priority, and then by the consumption of the starved compute resource relative to the Pods' scheduling requests.
As a result, kubelet ranks and evicts Pods in the following order:
1) BestEffort or Burstable Pods whose usage of a starved resource exceeds its request. Such pods are ranked by Priority, and then usage above request.
2) Guaranteed pods and Burstable pods whose usage is beneath requests are evicted last.
Guaranteed Pods are guaranteed only when requests and limits are specified for all the containers and they are equal. Such pods are guaranteed to never be evicted because of another Pod's resource consumption. If a system daemon (such as kubelet, docker, and journald) is consuming more resources than were reserved via system-reserved or kube-reserved allocations, and the node only has Guaranteed or Burstable Pods using less than requests remaining, then the node must choose to evict such a Pod in order to preserve node stability and to limit the impact of the unexpected consumption to other Pods. In this case, it will choose to evict pods of Lowest Priority first.
Make sure you're also familiar with:
1) The 3 QoS classes - make sure that your high-priority workloads are running with the Guaranteed (or at least Burstable) class; see the sketch after this list.
2) Pod Priority and Preemption.
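As a rough illustration of the Guaranteed class: a pod gets it when every container sets resource requests equal to limits. A minimal sketch (the pod name, image, and values here are arbitrary, chosen only for demonstration):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo       # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:               # requests == limits for every container
        cpu: "500m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
EOF
# Verify which QoS class was assigned; this should print "Guaranteed"
kubectl get pod guaranteed-demo -o jsonpath='{.status.qosClass}'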
Upvotes: 0
Reputation: 34257
Generally speaking, the official requirement is to disable swap. The reason? It is not yet supported by Kubernetes. IMO they ask you to disable swap to prevent issues with multi-node cluster workload shifting.
However, I have a valid use case: I'm developing an on-prem product, a Linux distro shipped with kubeadm, with no horizontal scaling by design. To survive opportunistic memory peaks and still function (however slowly), I definitely need swap.
To run kubeadm with swap enabled:
1) Create a file at /etc/systemd/system/kubelet.service.d/20-allow-swap.conf with the content:
[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
2) Run:
sudo systemctl daemon-reload
3) Initialize kubeadm with the flag --ignore-preflight-errors=Swap:
kubeadm init --ignore-preflight-errors=Swap
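As a quick sanity check (these are standard systemd and ps invocations, not part of the original recipe), you can confirm the drop-in is picked up and the flag actually reaches the kubelet:
# The output should list 20-allow-swap.conf among the unit's drop-in files
systemctl cat kubelet
# Once kubeadm has started the kubelet, the flag should appear on its command line
ps aux | grep '[k]ubelet' | grep -o -- '--fail-swap-on=[a-z]*'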
Upvotes: 16
Reputation: 1799
sudo swapoff -a is not persistent across reboot.
You may disable swap after reboot by just commenting out (add # in front of the line) the swap entry in /etc/fstab file. It will prevent swap partition from automatically mounting after a reboot.
Steps:
Open the file /etc/fstab
Look for a line like the one below:
# swap was on /dev/sda5 during installation
UUID=e7bf7d6e-e90d-4a96-91db-7d5f282f9363 none swap sw 0 0
Comment out the above line with # and save the file. It should look like this:
# swap was on /dev/sda5 during installation
#UUID=e7bf7d6e-e90d-4a96-91db-7d5f282f9363 none swap sw 0 0
Reboot the system, or for the current session execute sudo swapoff -a to avoid a reboot.
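To confirm swap is really off, you can use standard Linux commands (independent of this answer's steps):
# Prints nothing once no swap areas are active
swapon --show
# The Swap line should show 0B total/used
free -h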
Upvotes: 3
Reputation: 19133
It looks like you have swap enabled on your server; can you disable it and re-run the init command?
error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to
Here are the steps to disable swap in CentOS: https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-swap-removing.html
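A minimal sketch of that re-run sequence (note that kubeadm reset discards the state left by the earlier failed init, so only use it on a node you are still bootstrapping):
# Disable swap for the current session
sudo swapoff -a
# Clean up state from the failed attempt
sudo kubeadm reset
# Try the initialization again
sudo kubeadm init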
Upvotes: 1