user11081980

Reputation: 3289

Orphan replicasets when running "kubectl apply" with a new image tag

My deployment yml file tags my image with the build version.

So every time I run kubectl apply from my release pipeline, it pulls the new image and deploys it properly.

My question is about the ReplicaSets: when I run kubectl get all, I see orphan ReplicaSets left over from the pods that were terminated with the previous images (at least, that's my understanding). The desired, current, and ready counts of these orphan ReplicaSets are all 0.

Will this lead to some sort of memory leak? Should I run any other command before kubectl apply?

Upvotes: 0

Views: 363

Answers (1)

hoque

Reputation: 6471

When you upgrade your Deployment from version 1 to version 2, the Deployment controller creates a new ReplicaSet and scales its replica count up while the previous ReplicaSet's count goes down to 0. Details here

If you then execute another rolling update, from version 2 to version 3, you will notice that at the end of the upgrade you have two ReplicaSets with a count of 0.
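
You can see these with kubectl get replicasets. As an illustrative sketch (the Deployment name and hash suffixes below are made up), the superseded ReplicaSets stay listed with all counts at 0:

kubectl get replicasets

NAME                       DESIRED   CURRENT   READY   AGE
my-deployment-5d59d67564   1         1         1       5m
my-deployment-7c6f7f7b6c   0         0         0       2h

Those zero-count entries are exactly the "orphans" from the question.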

How does this benefit us?

Imagine that the current version of the pod introduces a problem and you want to roll back to the previous version. Because the old ReplicaSet is still there, you can scale the current one down to 0 and scale the old one back up; the commands sketched below show this. See Rolling Back to a Previous Revision.
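
As a sketch, assuming the Deployment is named my-deployment (revision numbers here are illustrative), kubectl does the scaling for you:

kubectl rollout history deployment/my-deployment              # list the stored revisions
kubectl rollout undo deployment/my-deployment                 # roll back to the previous revision
kubectl rollout undo deployment/my-deployment --to-revision=2 # or to a specific revision

Under the hood, this is the ReplicaSet scale-down/scale-up described above.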

By default, Kubernetes keeps the last 10 ReplicaSets and lets you roll back to any of them. You can change this by setting spec.revisionHistoryLimit in your Deployment. Ref: Clean up Policy

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 3   # keep only the 3 most recent old ReplicaSets for rollback
...
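
With this in place, the Deployment controller garbage-collects everything beyond the 3 most recent ReplicaSets on later rollouts, so there is no extra command you need to run before kubectl apply. The zero-count ReplicaSets are not a memory leak in any practical sense either: they run no pods and are just small records kept in the cluster for rollback purposes.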

Upvotes: 4
