Reputation: 187
I have installed a two-node Kubernetes 1.12.1 cluster on cloud VMs, both behind an internet proxy. Each VM has a floating IP associated for connecting over SSH; kube-01 is the master and kube-02 is a worker node. Before running kubeadm init, I executed:
export no_proxy=127.0.0.1,localhost,10.157.255.185,192.168.0.153,kube-02,192.168.0.25,kube-01
but I am getting the following status from kubectl get nodes:
NAME      STATUS     ROLES    AGE   VERSION
kube-01   NotReady   master   89m   v1.12.1
kube-02   NotReady   <none>   29s   v1.12.2
Am I missing any configuration? Do I need to add 192.168.0.153 and 192.168.0.25 to each VM's /etc/hosts?
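For reference, the /etc/hosts entries being considered would look like this (the IP-to-hostname mapping is an assumption based on the ordering in the no_proxy line above):

# /etc/hosts on each VM (assumed: 192.168.0.153 -> kube-02, 192.168.0.25 -> kube-01)
192.168.0.153   kube-02
192.168.0.25    kube-01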
Upvotes: 15
Views: 40934
Reputation: 589
I found that if you delete a node, re-provision it, and re-join it with the same name WITHOUT issuing a node delete command first, the node will join but report a NotReady state, without much else to indicate the problem.
It is likely an authentication issue bound to the previous system of the same name.
Either rename the new node or issue kubectl delete node <nodename> before re-joining.
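A minimal sketch of the clean re-join sequence, assuming a kubeadm-based cluster (the node name, master address, token, and hash are placeholders):

# On the master: remove the stale node object left by the old machine
kubectl delete node <nodename>

# On the re-provisioned node: clear any previous kubeadm state, then re-join
kubeadm reset
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>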
Upvotes: 0
Reputation: 1
Run
journalctl -u kubelet
and check the node's logs. If you see the error below, disable swap using swapoff -a:
"Failed to run kubelet" err="failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false"
Main process exited, code=exited, status=1/FAILURE
Upvotes: 0
Reputation: 588
On the off chance it might be the same for someone else: in my case, I was using the wrong AMI to create the node group.
Upvotes: 0
Reputation: 1428
It looks like a pod network is not installed on your cluster yet. You can install Weave Net, for example, with the command below:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
After a few seconds, a Weave Net pod should be running on each Node and any further pods you create will be automatically attached to the Weave network.
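To confirm, you can list the Weave Net pods (assuming the default name=weave-net label applied by the manifest):

kubectl get pods -n kube-system -l name=weave-net -o wide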
You can install a pod network of your choice; the Kubernetes addons page at https://kubernetes.io/docs/concepts/cluster-administration/addons/ has a list.
After this, check:
$ kubectl describe nodes
and verify that everything is fine, like below:
Conditions:
Type Status
---- ------
OutOfDisk False
MemoryPressure False
DiskPressure False
Ready True
Capacity:
cpu: 2
memory: 2052588Ki
pods: 110
Allocatable:
cpu: 2
memory: 1950188Ki
pods: 110
Next, SSH to the node which is NotReady and observe the kubelet logs. The most likely errors relate to certificates and authentication.
You can also use journalctl on systemd-based systems to check for kubelet errors:
$ journalctl -u kubelet
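A couple of handy variations for narrowing down the output (standard journalctl flags; the grep filter is just one assumption about what to look for):

# Follow the kubelet log live and filter for errors
$ journalctl -u kubelet -f --no-pager | grep -i error
# Show only entries from the current boot
$ journalctl -u kubelet -b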
Upvotes: 11
Reputation: 3427
Try this: your coredns pods are probably stuck in Pending state. Check which networking plugin you used and make sure the proper addons are added. See the Kubernetes addons guide:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Install a network addon from that list, and then check:
kubectl get pods -n kube-system
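To look specifically at the coredns pods, you can filter by label (k8s-app=kube-dns is the label in a default kubeadm setup, which is an assumption about your cluster):

kubectl get pods -n kube-system -l k8s-app=kube-dns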
Upvotes: 1