Reputation: 343
I have a k8s cluster with three masters, two workers, and an external HAProxy load balancer, and I use Flannel as the CNI. The CoreDNS pods have a problem: their status is Running, but they never become Ready.
CoreDNS log
When I get the logs of these pods, I see this message:
[INFO] plugin/ready: Still waiting on: "kubernetes"
What I have tried to solve this problem, without any result:
1. Checked ufw and disabled it.
2. Checked iptables and flushed the rules.
3. Checked the kube-proxy logs.
4. Checked HAProxy; it is accessible from outside and from all servers in the cluster.
5. Checked the node network.
6. Rebooted all servers, as a last resort. :))
Here is the output of kubectl describe po:
Upvotes: 2
Views: 4606
Reputation: 1461
First, check whether you can reach your service both ways:
IP:PORT
and Service-name:PORT
kubectl run -it --rm test-nginx-svc --image=nginx -- bash
curl http://<SERVICE-IP>:8080
curl http://nginx-service:8080
If you can curl your service by IP but not via Service-name:PORT, then you probably have a DNS issue.
Service Name Resolution Problems?
To debug name resolution, start a pod and inspect its /etc/resolv.conf:
kubectl run -it --rm test-nginx-svc --image=nginx -- bash
cat /etc/resolv.conf
It should look something like this:
nameserver 10.96.0.10 # IP address of the CoreDNS (kube-dns) Service
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
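A side note on that resolv.conf: with options ndots:5, a short name like nginx-service (fewer than five dots) is tried against the search domains first, which is how a bare Service name resolves inside a pod. A minimal sketch of that expansion, using the search domains shown above:

```shell
#!/bin/sh
# Sketch: how the resolver expands the short name "nginx-service" using the
# search domains from the pod's /etc/resolv.conf. With ndots:5, any name with
# fewer than 5 dots goes through the search list before being tried as-is.
name="nginx-service"
for domain in default.svc.cluster.local svc.cluster.local cluster.local; do
  echo "$name.$domain"
done
echo "$name"   # tried last, as an absolute name
```

So the first candidate the pod's resolver asks CoreDNS about is nginx-service.default.svc.cluster.local, which is the FQDN of a Service named nginx-service in the default namespace.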
If that doesn't help, I suggest re-installing it via the official docs or the Helm chart,
OR
trying another CNI, such as Weave.
Upvotes: 1
Reputation: 343
I solved the problem. It was not related to CoreDNS; it was the iptables rules. Some of the iptables rules for Kubernetes Services had not been created. With the rules shown in the picture below, everything is okay:
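For anyone hitting the same symptom: kube-proxy (in iptables mode) is what creates those Service rules, under well-known chain names like KUBE-SERVICES and KUBE-POSTROUTING in the nat table, so flushing iptables on a node wipes them until kube-proxy rewrites them. A quick sanity check is to look for those chains in the iptables-save output; a sketch, run here against an inline stand-in dump rather than a live node:

```shell
#!/bin/sh
# Sketch: check that kube-proxy's NAT chains exist. The chain names
# (KUBE-SERVICES, KUBE-POSTROUTING) are the standard ones created by
# kube-proxy's iptables proxier. The dump below is a stand-in for the
# real output of: sudo iptables-save -t nat
dump=':PREROUTING ACCEPT [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-POSTROUTING - [0:0]'
for chain in KUBE-SERVICES KUBE-POSTROUTING; do
  if printf '%s\n' "$dump" | grep -q "^:$chain "; then
    echo "$chain: present"
  else
    echo "$chain: MISSING"
  fi
done
```

If the chains are missing, restarting the kube-proxy pods makes kube-proxy recreate them; on kubeadm-style clusters they typically carry the label k8s-app=kube-proxy, so something like kubectl -n kube-system delete pod -l k8s-app=kube-proxy should do it.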
Upvotes: 0