Reputation: 218
I have a Kubernetes cluster, in which different pods are running in different namespaces. How do I know if any pod failed?
Is there any single command to check the list of failed pods or restarted pods?
And the reason for the restart (logs)?
Upvotes: 4
Views: 24882
Reputation: 136
Most of the time, the reason for an app failure is printed in the last logs of the previous pod. You can see them by simply adding the --previous
flag to your kubectl logs ...
command.
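For example, with placeholder pod and namespace names:
$ kubectl logs my-app-pod -n my-namespace --previous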
Upvotes: 0
Reputation: 14084
It depends on whether you want detailed information or just want to check the last few failed pods.
I would recommend you to read about Logging Architecture.
If you would like to have this detailed information, you should use 3rd-party software, as described in the Kubernetes documentation - Logging Using Elasticsearch and Kibana - or another tool such as FluentD.
If you are using a cloud environment, you can use tools integrated with cloud logging (e.g. in Google Cloud Platform
you can use Stackdriver).
If you want to check the logs to find the reason why a pod failed, it is well described in the K8s docs: Debug Running Pods.
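As part of that, kubectl describe is useful for seeing why a pod restarted: the Last State section shows the previous container's exit code and reason. A minimal example, with placeholder pod and namespace names:
$ kubectl describe pod my-app-pod -n my-namespace
# Check "Last State", "Reason", "Exit Code" and the Events section at the bottom.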
If you want to get logs from a specific pod:
$ kubectl logs ${POD_NAME} -n ${NAMESPACE}
First, look at the logs of the affected container:
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
If your container has previously crashed, you can access the previous container's crash log with:
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
You can obtain additional information using:
$ kubectl get events -o wide --all-namespaces | grep <your condition>
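For example, to look for events related to failing or restarting pods (the grep pattern below is just one illustrative choice, not an exhaustive list of event reasons):
$ kubectl get events -o wide --all-namespaces | grep -Ei 'backoff|failed|evicted|unhealthy|killing'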
A similar question was posted in this SO thread; you can check it for more details.
Upvotes: 5
Reputation: 209
This'll work: kubectl get pods --all-namespaces | grep -Ev '([0-9]+)/\1'
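That grep drops pods whose READY count matches the desired count, leaving the not-fully-ready ones. If you specifically want pods in the Failed phase, a field selector also works (note this only catches pods whose phase is Failed, not every pod that has restarted):
$ kubectl get pods --all-namespaces --field-selector=status.phase=Failed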
Also, Lens is pretty good in these situations.
Upvotes: -1