Reputation: 19099
Based on this guide (https://kubernetes.io/docs/getting-started-guides/kubeadm/) I installed Kubernetes on a CentOS 7 box and ran the kubeadm init command.
But the node is not in Ready status. When I look at /var/log/messages, I see the messages below.
Apr 30 22:19:38 master kubelet: W0430 22:19:38.226441 2372 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 30 22:19:38 master kubelet: E0430 22:19:38.226587 2372 kubelet.go:2067] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
My kubelet is running with these arguments.
/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cgroup-driver=systemd
On my server I don't see an /etc/cni/net.d directory. In the /opt/cni/bin directory I see these files:
# ls /opt/cni/bin
bridge cnitool dhcp flannel host-local ipvlan loopback macvlan noop ptp tuning
How can I clear this error message?
Upvotes: 6
Views: 39059
Reputation: 942
None of the above solutions worked for me. I found out that my server did not have a default route!
# route -n
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 wlp9s0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
So I added the default gateway with the following command:
# route add default gw 192.168.1.1
# route -n
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    600    0        0 wlp9s0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 wlp9s0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
Right now iptables works fine.
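If your distribution ships the iproute2 tools instead of the legacy route command, the same fix looks like this (the gateway address is just an example; use your own):
# ip route add default via 192.168.1.1
# ip route show default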
Upvotes: 0
Reputation: 265
If you are on AWS (I'm using CloudFormation YAMLs), I suggest that you match the AMI for your Kubernetes version with the region and AMI ID:
Kubernetes version 1.13.8, Region: US East (N. Virginia) (us-east-1), Amazon EKS-optimized AMI: ami-0d3998d69ebe9b214
then apply your mapping:
kubectl apply -f aws-auth-cm.yaml
Then watch the magic:
kubectl get nodes --watch
https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html
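For reference, aws-auth-cm.yaml is the aws-auth ConfigMap described in the guide above; a rough sketch of its shape (the role ARN is a placeholder you must replace with your own node instance role):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of your node instance role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes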
Upvotes: 1
Reputation: 512
I think this problem is caused by kubeadm initializing CoreDNS before flannel is installed, so it throws "network plugin is not ready: cni config uninitialized".
Solution:
1. Install flannel:
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
2. Reset the CoreDNS pod (a selector-based alternative is sketched after this list):
kubectl -n kube-system delete pod coredns-xx-xx
3. Then run kubectl -n kube-system get pods to see if it works.
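If you don't want to look up the exact CoreDNS pod name for step 2, a label selector should work as well (this assumes the default k8s-app=kube-dns label that kubeadm puts on the CoreDNS pods):
kubectl -n kube-system delete pod -l k8s-app=kube-dns
kubectl -n kube-system get pods -o wide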
If you see the error "cni0" already has an IP address different from 10.244.1.1/24, follow this:
ifconfig cni0 down
brctl delbr cni0
ip link delete flannel.1
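After removing the stale interfaces, restarting the container runtime and the kubelet should let flannel recreate cni0 with the expected address (a sketch, assuming a systemd-managed setup like the one in the question):
systemctl restart docker
systemctl restart kubelet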
If you see the error "Back-off restarting failed container", you can get the log with:
root@master:/home/moonx/yaml# kubectl logs coredns-86c58d9df4-x6m9w -n=kube-system
.:53
2019-01-22T08:19:38.255Z [INFO] CoreDNS-1.2.6
2019-01-22T08:19:38.255Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
[INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
[FATAL] plugin/loop: Forwarding loop detected in "." zone. Exiting. See https://coredns.io/plugins/loop#troubleshooting. Probe query: "HINFO 1599094102175870692.6819166615156126341.".
Then check the file /etc/resolv.conf on the failed node. If the nameserver is localhost there will be a forwarding loop. Change it to:
#nameserver 127.0.1.1
nameserver 8.8.8.8
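If the node manages /etc/resolv.conf through systemd-resolved, an alternative to editing the file by hand is to point the kubelet at the real upstream resolver list with its --resolv-conf flag; a rough sketch, assuming a kubeadm setup where the extra kubelet flags live in /var/lib/kubelet/kubeadm-flags.env (the path can vary by version):
# vi /var/lib/kubelet/kubeadm-flags.env   (append --resolv-conf=/run/systemd/resolve/resolv.conf)
# systemctl daemon-reload
# systemctl restart kubelet
# kubectl -n kube-system delete pod -l k8s-app=kube-dns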
Upvotes: 1
Reputation: 106
Looks like you've chosen flannel for CNI networking. Please check whether you specified --pod-network-cidr=10.244.0.0/16 when you ran kubeadm init.
Also check whether you have the ConfigMap created for flannel, as in https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
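If the flag was missed, one way out is to redo the init; a minimal sketch (note that kubeadm reset wipes the existing cluster state, and the last URL is simply the raw form of the manifest linked above):
# kubeadm reset
# kubeadm init --pod-network-cidr=10.244.0.0/16
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml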
Upvotes: 6