Vojtěch

Reputation: 12416

Deleting deployment leaves trailing replicasets and pods

I am running Kubernetes in GCP, and since updating a few months ago (I am now on 1.17.13-gke.2600) I have been observing trailing ReplicaSets and Pods after deleting a Deployment. Consider the state before deletion:

$ k get deployment | grep parser
parser-devel                              1/1     1       1        38d
$ k get replicaset | grep parser
parser-devel-66bfc86ddb                   0       0       0        27m
parser-devel-77898d9b9d                   1       1       1      5m49s
$ k get pod | grep parser
parser-devel-77898d9b9d-4w48w             1/1     Running 0       6m2s

Then I delete the deployment:

$ k delete deployment parser-devel
deployment.apps "parser-devel" deleted
$ k get replicaset | grep parser
parser-devel-66bfc86ddb                   0       0       0        28m
parser-devel-77898d9b9d                   1       1       1       7m1s
$ k get pod | grep parser
parser-devel-77898d9b9d-4w48w             1/1     Running 0       7m6s

Then I try to delete the replicasets:

$ k delete replicaset parser-devel-66bfc86ddb parser-devel-77898d9b9d
replicaset.apps "parser-devel-66bfc86ddb" deleted
replicaset.apps "parser-devel-77898d9b9d" deleted
$ k get pod | grep parser
parser-devel-77898d9b9d-4w48w             1/1     Running 0      8m14s
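
For reference, the remaining Pod's owner reference can be inspected like this (pod name taken from the output above):

$ k get pod parser-devel-77898d9b9d-4w48w -o jsonpath='{.metadata.ownerReferences}'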

As far as I understand Kubernetes, this is not correct behaviour, so why is it happening?

Upvotes: 3

Views: 756

Answers (2)

Jose Luis Delgadillo

Reputation: 2448

The trailing ReplicaSets that you see after deleting a Deployment depend on the revision history limit configured in your Deployment.

.spec.revisionHistoryLimit is an optional field that specifies the number of old ReplicaSets to retain to allow rollback. By default, 10 old ReplicaSets will be kept.

You can check the configured limit with the following command:

kubectl get deployment DEPLOYMENT -o yaml | grep revisionHistoryLimit

and you can modify it interactively with:

kubectl edit deployment DEPLOYMENT
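
A non-interactive alternative is to patch the field directly (the value 2 here is just an example):

kubectl patch deployment DEPLOYMENT -p '{"spec":{"revisionHistoryLimit":2}}'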

Edit 1

I created a GKE cluster on the same version (1.17.13-gke.2600) to check whether it deletes trailing resources when the parent object (the Deployment) is deleted.

For testing purposes, I created an nginx Deployment and then deleted it with kubectl delete deployment DEPLOYMENT_NAME; the Deployment and all its dependents (ReplicaSets and the Pods they created) were deleted.

Then I tested it again, this time adding the --cascade=false flag, as in kubectl delete deployment DEPLOYMENT_NAME --cascade=false, and all the dependent resources remained while the Deployment itself was deleted. Note that this flag deliberately orphans the dependents (their ownerReferences are removed), so the garbage collector in kube-controller-manager will not clean them up later.
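
As a minimal sketch of that second test (the deployment name nginx-test is illustrative):

# create a throwaway deployment
kubectl create deployment nginx-test --image=nginx

# delete it without cascading; the ReplicaSet and Pod are orphaned
kubectl delete deployment nginx-test --cascade=false

# both remain, now without owner references
kubectl get replicaset,pod -l app=nginx-test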

From the tests I made, this GKE version seems fine, as cascade deletion removed all the dependent resources created by the Deployment in my first test.

The cascade option defaults to true for several command verbs, including delete; see the kubectl reference documentation for details. Even so, could you create a Deployment and then delete it with kubectl delete deployment DEPLOYMENT_NAME --cascade=true, to see whether explicitly forcing cascade deletion helps in this case?

Upvotes: 0

Daein Park

Reputation: 4683

How about checking the ownerReferences of the ReplicaSet created by your Deployment? See Owners and dependents for more details. For the Deployment's dependents to be removed along with it, the Deployment's name and uid must exactly match the ones recorded in the ReplicaSet's ownerReferences. I have also seen a similar issue occur when the Kubernetes API was misbehaving; in that case, restarting the API service may help to resolve it.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  ...
  ownerReferences:
  - apiVersion: apps/v1
    controller: true
    blockOwnerDeletion: true
    kind: Deployment
    name: your-deployment
    uid: xxx
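
You can compare the two uids directly (resource names below are taken from the question):

# uid recorded in the ReplicaSet's owner reference
kubectl get replicaset parser-devel-77898d9b9d -o jsonpath='{.metadata.ownerReferences[0].uid}'

# uid of the Deployment itself; the two must match for cascading deletion to work
kubectl get deployment parser-devel -o jsonpath='{.metadata.uid}'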

Upvotes: 1
