reachlin

Reputation: 4782

What will happen to evicted pods in kubernetes?

I just saw some of my pods get evicted by Kubernetes. What happens to them? Do they just hang around like that, or do I have to delete them manually?

Upvotes: 182

Views: 195583

Answers (19)

S. Petrov

Reputation: 11

To contribute, for all the people struggling with 12.5k evicted pods: you can remove all of them quite easily with the "sanitize my pods!" function in k9s.

Even if you don't use k9s otherwise, I would suggest installing it just to use that function. It will automatically remove all pods in Evicted, Error, and similar statuses.

Whether you should do that all at once depends on whether the cluster can handle it, though.
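
For reference, a minimal way to install and launch k9s, assuming Homebrew is available (other install methods are listed on the k9s site):

brew install k9s
k9s
# in the pods view, press ? to list the keybindings and find the sanitize action for your k9s version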

Upvotes: 1

Mohammed-5253

Reputation: 41

In my case I had too many pods in Evicted and Completed status, and I used these two commands to delete all of them.

kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c 
kubectl delete pods --field-selector=status.phase==Succeeded --all-namespaces

Upvotes: 0

Tobias Bergkvist

Reputation: 2444

I found this to be the fastest way to delete evicted pods:

kubectl delete pod -A --field-selector 'status.phase==Failed'

(Only matters when you have A LOT of them accumulated)
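
If you want to check first what will be removed, the matching get uses the same field selector:

kubectl get pod -A --field-selector 'status.phase==Failed'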

Upvotes: 4

albin.varghese

Reputation: 59

When there are too many evicted pods in a cluster, this can add network load: each pod, even though it has been evicted, is still connected to the network, and in the case of a cloud Kubernetes cluster it will still have an IP address blocked, which can lead to IP-address exhaustion if your cluster has a fixed pool of IP addresses.

Also, when there are too many pods in Evicted status, it becomes difficult to monitor them by running kubectl get pod, as the output is cluttered with evicted pods, which can be a bit confusing at times.

To delete an evicted pod, run the following command:

kubectl delete pod <podname> -n <namespace>

If you have many evicted pods in a namespace:

kubectl get pod -n <namespace> | grep Evicted | awk '{print $1}' | xargs kubectl delete pod -n <namespace>

Upvotes: 0

victorm1710

Reputation: 1513

To answer the original question: the evicted pods will hang around until the number of them reaches the terminated-pod-gc-threshold limit (an option of kube-controller-manager, equal to 12500 by default); this is by-design behavior of Kubernetes (the same approach is used and documented for Jobs - https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup). Keeping the evicted pods around allows you to view their logs and check for errors, warnings, or other diagnostic output.
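
If you want to inspect one of these pods before it is garbage-collected, the usual commands apply (pod and namespace names are placeholders):

kubectl describe pod <podname> -n <namespace>   # shows the eviction reason and message
kubectl logs <podname> -n <namespace>           # logs may still be available if the container has not yet been cleaned up on the node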

Upvotes: 45

davidxxx

Reputation: 131324

Yet another way, still with awk.

To prevent any human error that could drive me crazy (deleting pods I want to keep), I first check the result of the get pods command:

kubectl -n my-ns get pods --no-headers --field-selector=status.phase=Failed     

If that looks good, here we go:

kubectl -n my-ns get pods --no-headers --field-selector=status.phase=Failed | \
awk '{system("kubectl -n my-ns delete pods " $1)}'

Same thing, but for pods in all namespaces.

Check:

kubectl get -A pods --no-headers --field-selector=status.phase=Failed     

Delete:

kubectl get -A pods --no-headers --field-selector status.phase=Failed | \
awk '{system("kubectl -n " $1 " delete pod " $2 )}'

Upvotes: 6

Marcelo Aguiar

Reputation: 181

The command below deletes all evicted pods from all namespaces:

kubectl get pods -A | grep Evicted | awk '{print $2 " -n " $1}' | xargs -n 3 kubectl delete pod

Upvotes: 17

Weike

Reputation: 1270

To delete all the Evicted pods by force, you can try this one-line command:

$ kubectl get pod -A | sed -nr '/Evicted/s/(^\S+)\s+(\S+).*/kubectl -n \1 delete pod \2 --force --grace-period=0/e'

Tip: using the p modifier of sed's s command instead of e will just print the delete commands instead of running them:

$ kubectl get pod -A | sed -nr '/Evicted/s/(^\S+)\s+(\S+).*/kubectl -n \1 delete pod \2 --force --grace-period=0/p'
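
Note that the e modifier is a GNU sed extension. If your sed does not have it, a more portable variant (still assuming -E extended-regex support) is to print the commands with p and pipe them to a shell:

$ kubectl get pod -A | sed -nE '/Evicted/s/^([^ ]+) +([^ ]+).*/kubectl -n \1 delete pod \2 --force --grace-period=0/p' | bash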

Upvotes: 2

bhavin

Reputation: 120

The command below gets all evicted pods in the default namespace and deletes them:

kubectl get pods | grep Evicted | awk '{print$1}' | xargs -I {} kubectl delete pods/{}

Upvotes: 1

Roman Marusyk

Reputation: 24569

One more bash command to delete evicted pods

kubectl get pods | grep Evicted | awk '{print $1}' | xargs kubectl delete pod

Upvotes: 14

LucasPC

Reputation: 653

Just in case someone wants to automatically delete all evicted pods across all namespaces:

  • Powershell
    Foreach ($x in (kubectl get po --all-namespaces --field-selector=status.phase=Failed --no-headers -o custom-columns=":metadata.namespace,:metadata.name")) { $ns, $name = $x -split '\s+', 2; kubectl delete po $name -n $ns }
  • Bash
kubectl get po --all-namespaces --field-selector=status.phase=Failed --no-headers -o custom-columns=":metadata.namespace,:metadata.name" | while read -r ns name; do kubectl delete po "$name" -n "$ns"; done

Upvotes: 12

Steveno

Reputation: 181

Kube-controller-manager exists by default with a working K8s installation. It appears that the default is a max of 12500 terminated pods before GC kicks in.

Directly from the K8s documentation: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#kube-controller-manager

--terminated-pod-gc-threshold int32     Default: 12500
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
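
On a kubeadm-provisioned cluster you can lower this by adding the flag to the kube-controller-manager static pod manifest on a control-plane node (a sketch with an example value of 100; managed offerings such as GKE/EKS/AKS generally do not expose this file):

# excerpt of /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --terminated-pod-gc-threshold=100
    # ...keep the existing flags as they are

The kubelet notices the manifest change and restarts the static pod automatically.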

Upvotes: 11

mefix

Reputation: 81

In case you also have pods with a Completed status that you want to keep around, this only deletes pods in the Failed phase:

kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -

Upvotes: 8

Kalvin

Reputation: 1452

A quick workaround I use is to delete all evicted pods manually after an incident. You can use this command:

kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
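
To preview what would be deleted, drop the final xargs stage; the jq filter then just prints the generated kubectl delete commands without running them:

kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"'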

Upvotes: 137

ticapix

Reputation: 1752

To delete pods in the Failed state in the default namespace:

kubectl -n default delete pods --field-selector=status.phase=Failed

Upvotes: 133

Hansika Weerasena

Reputation: 3364

Evicted pods should be deleted manually. You can use the following command to delete all pods in the Failed state:

kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -

Upvotes: 40

tikael

Reputation: 449

Here is the 'official' guide for how to hard-code the threshold (if you do not want to see too many evicted pods): kube-controller-manager

But a known problem is getting access to kube-controller-manager's configuration in the first place...

Upvotes: 0

ffghfgh

Reputation: 294

OpenShift equivalent of Kalvin's command to delete all 'Evicted' pods:

eval "$(oc get pods --all-namespaces -o json | jq -r '.items[] | select(.status.phase == "Failed" and .status.reason == "Evicted") | "oc delete pod --namespace " + .metadata.namespace + " " + .metadata.name')"

Upvotes: 3

Simon Tesar

Reputation: 1833

Depending on whether a soft or hard eviction threshold has been met, the containers in the Pod will be terminated with or without a grace period, the PodPhase will be marked as Failed, and the Pod deleted. If your application runs as part of e.g. a Deployment, another Pod will be created and scheduled by Kubernetes - probably on another Node that is not exceeding its eviction thresholds.

Be aware that eviction does not necessarily have to be caused by thresholds but can also be invoked via kubectl drain to empty a node or manually via the Kubernetes API.
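
For illustration, those soft and hard thresholds are configured on the kubelet; a minimal KubeletConfiguration sketch with made-up example values (the real defaults and available signals are in the kubelet documentation):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
evictionSoft:
  memory.available: "300Mi"
evictionSoftGracePeriod:
  memory.available: "1m30s"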

Upvotes: 32
