Reputation: 25914
I have some previously run pods that I think were killed by Kubernetes for OOM or DEADLINE EXCEEDED, what's the most reliable way to confirm that? Especially if the pods weren't recent.
Upvotes: 29
Views: 47290
Reputation: 246
If the pods are still showing up when you type kubectl get pods [-A/--all-namespaces],
then you can type kubectl describe pod PODNAME
and look at the reason for termination. The output will look similar to the following (I have extracted the parts of the output that are relevant to this discussion):
Containers:
  somename:
    Container ID:   docker://5f0d9e4c8e0510189f5f209cb09de27b7b114032cc94db0130a9edca59560c11
    Image:          ubuntu:latest
    ...
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
In the sample output above, my pod's termination reason is Completed,
but in your case you may see other reasons there, such as OOMKilled.
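One detail worth knowing when reading the Exit Code field: values above 128 mean the container was killed by a signal, where the signal number is the exit code minus 128. An OOM-killed container therefore typically shows 137 (128 + 9 for SIGKILL). A minimal sketch of decoding that (the decode_exit helper is just an illustration, not part of kubectl):

```shell
# Decode a container exit code: values above 128 indicate death by signal.
decode_exit() {
  code=$1
  if [ "$code" -gt 128 ]; then
    echo "killed by signal $((code - 128))"
  else
    echo "exited with status $code"
  fi
}

decode_exit 137   # → killed by signal 9 (SIGKILL, the usual OOMKilled code)
decode_exit 0     # → exited with status 0
```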
Upvotes: 13
Reputation: 71
I had a similar problem where my pods were getting killed and deleted almost immediately. I was tailing the logs but couldn't find much there, and by the time I went to look at the events, the pod had already been deleted, leaving no trace.
What I did was put a watch on it, refreshing every 1-2 seconds, to catch the events on screen:
watch -n2 "kubectl describe pod <pod_name>"
Not sure if this helps everyone, but it worked for me.
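A variation on the same trick, in case the output scrolls by too fast to read: write each describe snapshot to a file so the last observed state survives after the pod is gone. A sketch in which the real kubectl call is stubbed out so it runs anywhere (the snapshot helper, pod name, and iteration count are placeholders):

```shell
# snapshot() is a hypothetical stand-in for: kubectl describe pod <pod_name>
snapshot() { echo "describe output captured at iteration $1"; }

# In real use this would loop with `sleep 2` until the pod disappears;
# three fixed iterations keep the sketch runnable without a cluster.
for i in 1 2 3; do
  snapshot "$i" > "/tmp/pod-snapshot.$i.txt"
done

cat /tmp/pod-snapshot.3.txt   # → describe output captured at iteration 3
```

The last file written holds the final state the pod reported before deletion.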
Upvotes: 1
Reputation: 3135
If the pod has already been deleted, you can also check the Kubernetes events and see what happened:
$ kubectl get events
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
59m 59m 1 my-pod-7477dc76c5-p49k4 Pod spec.containers{my-service} Normal Killing kubelet Killing container with id docker://my-service:Need to kill Pod
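On a busy cluster the event list gets long, so it helps to narrow it to one pod. A sketch: the live-cluster line is shown as a comment (it needs a reachable cluster), and the same filtering idea is demonstrated offline against a saved events dump:

```shell
# Live-cluster version (--field-selector is a standard kubectl flag;
# the pod name is a placeholder):
#   kubectl get events --field-selector involvedObject.name=my-pod-7477dc76c5-p49k4

# Offline demonstration: filter a saved `kubectl get events` dump for one pod.
cat > /tmp/events.txt <<'EOF'
59m   1   my-pod-7477dc76c5-p49k4   Pod   Normal   Killing   kubelet
12m   1   other-pod-6f9c            Pod   Normal   Pulled    kubelet
EOF
grep 'my-pod' /tmp/events.txt
```

One caveat: events are only retained for a limited window (one hour by default), so this won't help for pods that were killed long ago.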
Upvotes: 9