abergmeier

Reputation: 14052

Idiomatic way to wait for Deployment to be really scaled down

For tests, I want to have a way of waiting until a Deployment (replicas: 0) is really gone.

It seems like the Deployment Status is not waiting for all Pods to actually be terminated.

So I am wondering what the idiomatic way of waiting for a Deployment to be terminated is. In other words, I want a synchronous Delete of a Resource in Kubernetes, one that only returns after all owned Resources (recursively) have been deleted as well.

Upvotes: 2

Views: 6233

Answers (2)

Crou

Reputation: 11418

Your Pods (dependents) have metadata.ownerReferences pointing to your Deployment (owner), and since Kubernetes 1.8 they are by design removed first. The docs about owners and dependents say:

Sometimes, Kubernetes sets the value of ownerReference automatically. For example, when you create a ReplicaSet, Kubernetes automatically sets the ownerReference field of each Pod in the ReplicaSet. In 1.8, Kubernetes automatically sets the value of ownerReference for objects created or adopted by ReplicationController, ReplicaSet, StatefulSet, DaemonSet, Deployment, Job and CronJob.

You can check for that value by using kubectl get pods --output=yaml
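To see those owner references at a glance without scrolling through full YAML, you can project them into columns; a sketch (the column names are arbitrary, and for a Deployment the direct owner shown will typically be the intermediate ReplicaSet):

```shell
# List each Pod with the kind and name of its first owner reference
kubectl get pods -o custom-columns='NAME:.metadata.name,OWNER-KIND:.metadata.ownerReferences[0].kind,OWNER-NAME:.metadata.ownerReferences[0].name'
```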

Controlling how the garbage collector deletes dependents says:

When you delete an object, you can specify whether the object’s dependents are also deleted automatically. Deleting dependents automatically is called cascading deletion. There are two modes of cascading deletion: background and foreground.

If you delete an object without deleting its dependents automatically, the dependents are said to be orphaned.

In foreground cascading deletion, the owner object first enters a "deletion in progress" state. While in that state, the following are true:

  • The object is still visible via the REST API
  • The object’s deletionTimestamp is set
  • The object’s metadata.finalizers contains the value “foregroundDeletion”.

Once the garbage collector removes all “blocking” dependents (objects with ownerReference.blockOwnerDeletion=true), it deletes the owner object.

Note that in the “foregroundDeletion”, only dependents with ownerReference.blockOwnerDeletion=true block the deletion of the owner object. Kubernetes version 1.7 added an admission controller that controls user access to set blockOwnerDeletion to true based on delete permissions on the owner object, so that unauthorized dependents cannot delay deletion of an owner object.

If an object’s ownerReferences field is set by a controller (such as Deployment or ReplicaSet), blockOwnerDeletion is set automatically and you do not need to manually modify this field.

In background cascading deletion, Kubernetes deletes the owner object immediately and the garbage collector then deletes the dependents in the background.
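With a recent enough kubectl, you can pick the propagation policy on the command line; a sketch, assuming a deployment named my-deployment and a kubectl version that accepts named --cascade values (older versions only accepted --cascade=true/false, and foreground deletion then required setting propagationPolicy through the API directly):

```shell
# Foreground: the owner is only removed once all blocking dependents are gone
kubectl delete deployment/my-deployment --cascade=foreground

# Background (the default): the owner is deleted immediately and the
# garbage collector cleans up the dependents afterwards
kubectl delete deployment/my-deployment --cascade=background
```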

As for kubectl wait, it only observes the resource you point it at and does not care about any dependent resources, so what it tells you depends on which resource/group you target.

Upvotes: 2

Eduardo Baitello

Reputation: 11346

I think that kubectl delete on Kubernetes 1.11+ already waits for the deletion to be completed before returning:

kubectl delete --help | grep '\-\-wait'
--wait=true: If true, wait for resources to be gone before returning. This waits for finalizers.

Even so, you can use kubectl wait to wait for a resource deletion:

Wait for a specific condition on one or many resources. Alternatively, the command can wait for the given set of resources to be deleted by providing the "delete" keyword as the value to the --for flag.

e.g.: kubectl wait deployment/my-deployment --for=delete
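Putting the two together, a sketch (my-deployment is a placeholder, and --timeout is optional but avoids waiting forever if a finalizer hangs):

```shell
# Delete and block until finalizers have run (the default on kubectl 1.11+)
kubectl delete deployment/my-deployment --wait=true

# Or, from another shell or script, wait for an in-flight deletion to finish
kubectl wait deployment/my-deployment --for=delete --timeout=120s
```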

Upvotes: 3
