Reputation: 8894
K8s 1.5.4. I have a Kubernetes cluster where the dashboard has quit working. It ran fine for weeks but no longer displays.
When I run kubectl proxy and navigate to
http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
I get the following:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
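For reference, that 503 means the kubernetes-dashboard service currently has no ready pods behind it. A minimal way to confirm this (a sketch; the service name and namespace are taken from the proxy URL above) is to inspect the Endpoints object and the service selector directly:
# Should list at least one pod IP under ENDPOINTS; empty means no ready backends
kubectl get endpoints kubernetes-dashboard -n kube-system
# Check that the Selector shown here matches the labels on the dashboard pod
kubectl describe service kubernetes-dashboard -n kube-system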
When I run kubectl get pods --namespace=kube-system, I get:
NAME                                    READY   STATUS    RESTARTS   AGE
heapster-v1.2.0-4088228293-j8gn5        2/2     Running   0          19d
kube-apiserver-96.118.29.200            1/1     Running   1          6d
kube-controller-manager-96.118.29.200   1/1     Running   29         20d
kube-dns-782804071-92kq8                4/4     Running   0          20d
kube-dns-autoscaler-2715466192-90jhm    1/1     Running   0          20d
kube-proxy-96.118.29.200                1/1     Running   22         20d
kube-proxy-96.118.29.213                1/1     Running   0          20d
kube-proxy-96.118.29.214                1/1     Running   0          20d
kube-proxy-96.118.29.217                1/1     Running   1          19d
kube-scheduler-96.118.29.200            1/1     Running   30         20d
kubernetes-dashboard-3543765157-7j40w   1/1     Running   0          20d
Running kubectl logs kubernetes-dashboard-3543765157-7j40w --namespace=kube-system hangs.
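(As an aside, a hanging kubectl logs call often means the API server cannot reach the kubelet on the pod's node. Re-running it with client verbosity turned up, using kubectl's standard --v flag, at least shows which request stalls:)
# Print the underlying API requests so the stalled call is visible
kubectl logs kubernetes-dashboard-3543765157-7j40w --namespace=kube-system --v=8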
Any ideas on how to diagnose?
Upvotes: 0
Views: 316
Reputation: 2298
Check which node this pod is scheduled on by running:
kubectl get pods -n kube-system -o wide
Next, SSH to that node and check that disk space is available: df -h
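If disk space does look like the problem, the node's own view can also be checked through the API (a sketch; replace <node-name> with the node found in the previous step):
# Look for DiskPressure / OutOfDisk under the Conditions section
kubectl describe node <node-name>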
Also take a look at the output of kubectl describe pod -n kube-system kubernetes-dashboard-3543765157-7j40w
And lastly, try deleting the pod: kubectl delete pod -n kube-system kubernetes-dashboard-3543765157-7j40w
The Deployment will recreate it automatically, and you'll then be able to check the logs of the new pod.
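For completeness, a sketch of that last step (the new pod name is a placeholder; take it from the get pods output once the replacement appears):
# Watch the replacement pod get created and reach Running
kubectl get pods -n kube-system -w
# Then fetch its logs
kubectl logs -n kube-system <new-dashboard-pod-name>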
Upvotes: 1