In OKD 4.x, when using `DeploymentConfig`s (and their superset, `Template`s) to deploy apps, after a pod named `<POD>` has changed status to `Running`, an auxiliary "deployer" pod named `<POD>-deploy` lingers forever with the `Completed` status. This was not the case in OCP 3.11 and OKD 3.11, where these pods were auto-deleted. How can that behavior be reproduced in OKD 4.x?
This is how it works in OCP 3.11 - notice what happens at the end with the `*-deploy` pod:
NAME                          READY   STATUS        RESTARTS   AGE
ml-admin-vsc-node2-1-mtjz5    1/1     Running       0          7s
ml-admin-vsc-node2-1-tdr4g    0/1     Terminating   1          8d
ml-admin-vsc-node2-1-deploy   0/1     Completed     0          13s
ml-admin-vsc-node2-1-deploy   0/1     Terminating   0          13s
The reasons for trying to reproduce this OCP (OpenShift) / OKD 3.11 behavior are many. The non-running deployer pods share most of their name with the running pods, so they have to be filtered out of pod listings when using the CLI (`oc` = `kubectl`); they count towards various limits, such as the 250-pods-per-node limit and the IP address pool (the IPs stay allocated to these non-running pods, which is possibly even a bug); and they negatively affect cluster performance (e.g. by slowing down metadata processing in `etcd`).
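To illustrate the filtering overhead these lingering pods force on day-to-day CLI use, here is a minimal sketch that drops every deployer pod from a pod listing by its name suffix. The sample `oc get pods` output is inlined so the pipeline is self-contained; in practice you would pipe the live command through the same `awk`:

```shell
#!/bin/sh
# Sample `oc get pods` output, inlined for illustration; in real use:
#   oc get pods | awk '$1 !~ /-deploy$/'
pods='NAME                          READY   STATUS        RESTARTS   AGE
ml-admin-vsc-node2-1-mtjz5    1/1     Running       0          7s
ml-admin-vsc-node2-1-deploy   0/1     Completed     0          13s'

# Keep only rows whose pod name (first column) does NOT end in "-deploy"
printf '%s\n' "$pods" | awk '$1 !~ /-deploy$/'
```

Alternatively, `oc get pods --field-selector=status.phase=Running` filters server-side by phase, which hides the `Completed` deployer pods without name matching, at the cost of also hiding any other non-running pods you might want to see.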