Maverik

Reputation: 29

Kubernetes dashboard not working

I tried to deploy the Kubernetes dashboard, but it is always in CrashLoopBackOff status; no matter how many times I deploy it, it meets the same fate.

host-xxx:~ # kubectl get pods --all-namespaces                                                                                                                                                                                          
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
default       locust-master-pr59t                     1/1       Running            0          2d
default       my-nginx-2565190728-8z0eh               1/1       Running            0          2d
default       my-nginx-2565190728-if4my               1/1       Running            0          2d
kube-system   kubernetes-dashboard-1975554030-80rxv   0/1       CrashLoopBackOff   249        21h

I used the yaml file from "kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard-head.yaml"

host-xxx:~ # kubectl describe pod --namespace=kube-system kubernetes-dashboard-1975554030-80rxv
Name:           kubernetes-dashboard-1975554030-80rxv
Namespace:      kube-system
Node:           host-44-11-1-25/44.11.1.25
Start Time:     Wed, 21 Dec 2016 14:49:48 +0000
Labels:         app=kubernetes-dashboard
                pod-template-hash=1975554030
Status:         Running
IP:             172.20.140.2
Controllers:    ReplicaSet/kubernetes-dashboard-1975554030
Containers:
  kubernetes-dashboard:
    Container ID:               docker://708aac5cebdff057b69cec94e582cb45f7dba424c336fb320dd0d5e3243fc323
    Image:                      gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
    Image ID:                   docker://sha256:e5133bac8024ac6c916f16df8790259b5504a800766bee87dcf90ec7d634a418
    Port:                       9090/TCP
    State:                      Waiting
      Reason:                   CrashLoopBackOff
    Last State:                 Terminated
      Reason:                   Error
      Exit Code:                1
      Started:                  Thu, 22 Dec 2016 11:32:12 +0000
      Finished:                 Thu, 22 Dec 2016 11:32:13 +0000
    Ready:                      False
    Restart Count:              244
    Liveness:                   http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment Variables:      <none>
Conditions:
  Type          Status
  Initialized   True 
  Ready         False 
  PodScheduled  True 
No volumes.
QoS Tier:       BestEffort
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath                           Type            Reason                  Message
  ---------     --------        -----   ----                            -------------                           --------        ------                  -------
  20h           3m              245     {kubelet host-44-11-1-25}       spec.containers{kubernetes-dashboard}   Normal          Pulling                 pulling image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0"
  20h           3m              246     {kubelet host-44-11-1-25}                                               Warning         MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  20h           3m              245     {kubelet host-44-11-1-25}       spec.containers{kubernetes-dashboard}   Normal          Pulled                  Successfully pulled image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0"
  20h           3m              236     {kubelet host-44-11-1-25}       spec.containers{kubernetes-dashboard}   Normal          Created                 (events with common reason combined)
  20h           3m              236     {kubelet host-44-11-1-25}       spec.containers{kubernetes-dashboard}   Normal          Started                 (events with common reason combined)
  20h           3s              5940    {kubelet host-44-11-1-25}       spec.containers{kubernetes-dashboard}   Warning         BackOff                 Back-off restarting failed docker container
  20h           3s              5906    {kubelet host-44-11-1-25}                                               Warning         FailedSync              Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-1975554030-80rxv_kube-system(b893d9c4-c78c-11e6-bd87-fa163e39bb70)"

I am not sure why I get this error message; on the minion it looks like the image is pulled, and I can pull it with docker.

Do you guys have a theory on this?

Upvotes: 0

Views: 3172

Answers (2)

Zhao Jian

Reputation: 212

As the earlier answer said,

You could use the command kubectl logs --namespace=kube-system kubernetes-dashboard-1975554030-80rxv to check the details of what went wrong.

For the error kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy:

Maybe you have not enabled the kube-dns component yet, which may cause this problem.
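
A quick way to check is to list the kube-dns Pods (assuming the standard kube-dns add-on, which labels its Pods with k8s-app=kube-dns):

kubectl get pods --namespace=kube-system -l k8s-app=kube-dns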

If you are still using https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard-head.yaml to create your Kubernetes dashboard, you could run kubectl edit deployment kubernetes-dashboard -n kube-system to manually change the DNS policy: set spec.template.spec.dnsPolicy to Default, then the Deployment will recreate the Pod and you can see if it works.
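
The same change can also be made non-interactively; a minimal sketch, assuming the Deployment from that yaml is named kubernetes-dashboard:

kubectl patch deployment kubernetes-dashboard --namespace=kube-system -p '{"spec":{"template":{"spec":{"dnsPolicy":"Default"}}}}'

After the patch, the Deployment rolls out a new Pod with the updated dnsPolicy.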

More details can be found at http://blog.kubernetes.io/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes.html

BTW, you could also create the dashboard from the official instructions: https://github.com/kubernetes/dashboard#getting-started

Upvotes: 0

Sandro Mello

Reputation: 11

The image pulled successfully, and the container is failing to start in the Pod. At this stage you can track down the problem by checking the logs of the container:

kubectl logs --namespace=kube-system kubernetes-dashboard-1975554030-80rxv
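
Since the container keeps crashing and restarting, the logs of the previous (terminated) instance may be the more useful ones; kubectl logs supports a --previous flag for that:

kubectl logs --previous --namespace=kube-system kubernetes-dashboard-1975554030-80rxv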

The kubelet issued a warning:

kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.

Maybe the problem is related to DNS. I had a custom installation where the dashboard container only started once DNS was up; check the logs to understand why it's failing.
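
If DNS does turn out to be the cause, that warning usually means the kubelet was started without cluster DNS settings. A sketch of the relevant kubelet flags (the IP below is only a placeholder; it must match the clusterIP of your kube-dns Service):

kubelet ... --cluster-dns=10.254.0.10 --cluster-domain=cluster.local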

Upvotes: 1
