I am using Deployments to control my pods in my K8S cluster.
My original deployment file looks like :
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: websocket-backend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      name: websocket-backend
  template:
    metadata:
      labels:
        name: websocket-backend
    spec:
      containers:
      - name: websocket-backend
        image: armdock.se/proj/websocket_backend:3.1.4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            port: 8080
            path: /websocket/health
          initialDelaySeconds: 300
          timeoutSeconds: 30
        readinessProbe:
          httpGet:
            port: 8080
            path: /websocket/health
          initialDelaySeconds: 25
          timeoutSeconds: 5
This config is working as planned.
# kubectl get po | grep websocket
websocket-backend-deployment-4243571618-mreef 1/1 Running 0 31s
websocket-backend-deployment-4243571618-qjo6q 1/1 Running 0 31s
Now I plan to do a live/rolling update of the image. The command that I am using is:
kubectl set image deployment websocket-backend-deployment websocket-backend=armdock.se/proj/websocket_backend:3.1.5
I am only updating the Docker image tag, and I'm expecting to still have 2 pods after the update. I do get 2 new pods running the new version, but one pod carrying the old version still exists.
# kubectl get po | grep websocket
websocket-backend-deployment-4243571618-qjo6q 1/1 Running 0 2m
websocket-backend-deployment-93242275-kgcmw 1/1 Running 0 51s
websocket-backend-deployment-93242275-kwmen 1/1 Running 0 51s
As you can see, 1 pod still uses the old tag 3.1.4:
# kubectl describe po websocket-backend-deployment-4243571618-qjo6q | grep Image:
Image: armdock.se/proj/websocket_backend:3.1.4
The other 2 pods are on the new tag 3.1.5:
# kubectl describe po websocket-backend-deployment-93242275-kgcmw | grep Image:
Image: armdock.se/proj/websocket_backend:3.1.5
# kubectl describe po websocket-backend-deployment-93242275-kwmen | grep Image:
Image: armdock.se/proj/websocket_backend:3.1.5
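For completeness, the ReplicaSets that the Deployment controls can be listed with the command below; the old ReplicaSet (4243571618) should scale down to 0 once the rollout finishes, but I have not included that output here:
# kubectl get rs -l name=websocket-backend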
Why does 1 old pod still stay there and not get deleted? Am I missing some config?
When I check the rollout status, it's just stuck on:
# kubectl rollout status deployment/websocket-backend-deployment
Waiting for rollout to finish: 1 old replicas are pending termination...
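The update strategy the Deployment is actually using can be dumped straight from its spec (command only, no output pasted here):
# kubectl get deployment websocket-backend-deployment -o jsonpath='{.spec.strategy}'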
My K8S version is:
# kubectl --version
Kubernetes v1.5.2
Upvotes: 7
Views: 18370
Reputation: 672
Maybe k8s can't distinguish the images and is treating them as if they were different. Check whether you are fast-forwarding your commits, or whether the hash of the last commit on the branch you are deploying is different from the hash of the last commit you actually made.
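If you want to verify whether the two tags really resolve to different images, one option is to compare the image IDs reported by the pods (the pod name below is a placeholder):
kubectl get po <pod-name> -o jsonpath='{.status.containerStatuses[*].imageID}'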
Upvotes: 0
Reputation: 566
I would suggest setting maxSurge to 0 in the RollingUpdate strategy so that the number of desired pods stays the same during and after the rollout. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods.
Example:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 2
  selector:
    matchLabels:
      name: webserver
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
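With this strategy in place you can apply the manifest and watch the rollout converge back to exactly 2 pods (the file name here is just an example):
kubectl apply -f deployment.yaml
kubectl rollout status deployment/webserver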
Upvotes: 5