Reputation: 492
I'm using Docker Swarm to test services on AWS. I recently applied an update to the service like this:
docker service update --image TestImage:v2 --update-parallelism 2 \
--update-delay 10s TestService2
The update worked as intended, and the service rolled the task containers to v2. However, a quick docker service ps TestService2 | grep "v1" reveals a bunch of shut-down TestImage:v1 tasks:
a0w77kj0k6jfg4r9g4nz47zzg \_ TestService2.1 TestImage:v1 W1 Shutdown Shutdown 36 minutes ago
2of4mc63ekzbib01w3x7q6sdm \_ TestService2.2 TestImage:v1 W2 Shutdown Shutdown 37 minutes ago
495frrpza5pxt205o1594x54a \_ TestService2.3 TestImage:v1 W1 Shutdown Shutdown 36 minutes ago
57l0gsqd26u2e5gdj30w8mcn9 \_ TestService2.4 TestImage:v1 M1 Shutdown Shutdown 36 minutes ago
baoe1i79fswb34ydwbpafg6tm \_ TestService2.5 TestImage:v1 M3 Shutdown Shutdown 35 minutes ago
3uxi7kwxb73z69km6s17son58 \_ TestService2.6 TestImage:v1 M2 Shutdown Shutdown 37 minutes ago
99cg4arnt1y52nd8d422bdu49 \_ TestService2.7 TestImage:v1 M3 Shutdown Shutdown 36 minutes ago
cq5716jqp40h6jugo1j9ilzwp \_ TestService2.8 TestImage:v1 M1 Shutdown Shutdown 35 minutes ago
awlz1kxbrjk51dey7frm14d8u \_ TestService2.9 TestImage:v1 W3 Shutdown Shutdown 35 minutes ago
4xdi9a1jweyehfqlt76uynf3i \_ TestService2.10 TestImage:v1 M2 Shutdown Shutdown 36 minutes ago
eo4t6i0gaj5i294fcdnb3qncq \_ TestService2.11 TestImage:v1 W3 Shutdown Shutdown 35 minutes ago
3ydeuxdjquulv5xj94b9ovuwu \_ TestService2.12 TestImage:v1 W1 Shutdown Shutdown 36 minutes ago
How can I remove these without going to each individual swarm node and running docker rm on the exited containers? I don't think there's a docker service command to do it (I've looked through the docs), but does anyone know of a hack or script I can run on a Swarm Manager to clean up the nodes?
Thanks!
Upvotes: 26
Views: 25482
Reputation: 63
Before cleaning up, docker service ps web on my setup was full of old task history:
yi92mgl7z8jb web.2 nginx:latest manager1 Running Running 11 minutes ago
0cmzbd1oxwqr \_ web.2 nginx:latest manager1 Shutdown Failed 11 minutes ago "task: non-zero exit (255)"
moe7hex4qvmg \_ web.2 nginx:latest manager1 Shutdown Shutdown 11 minutes ago
iyxs118uo67d \_ web.2 nginx:latest manager1 Shutdown Shutdown 10 hours ago
v3uxafpxc4d3 \_ web.2 nginx:latest manager1 Shutdown Shutdown 11 minutes ago
6upsy8gvyrsn web.5 nginx:latest manager1 Running Running 11 minutes ago
mlaxkfusunqe \_ web.5 nginx:latest manager1 Shutdown Failed 11 minutes ago "task: non-zero exit (255)"
bh3nkp05yd6r \_ web.5 nginx:latest manager1 Shutdown Shutdown 11 minutes ago
lqedayxq7gr9 \_ web.5 nginx:latest manager1 Shutdown Shutdown 10 hours ago
xryxpfjsrdja \_ web.5 nginx:latest manager1 Shutdown Shutdown 11 minutes ago
I did:
docker swarm update --task-history-limit 2
docker-machine stop manager1
docker-machine start manager1
and afterwards:
PS C:\> docker service ps web | Select-String "manager1"
3ogu1r0y6s6t web.2 nginx:latest manager1 Running Running 3 minutes ago
0cmzbd1oxwqr \_ web.2 nginx:latest manager1 Shutdown Failed 3 minutes ago "task: non-zero exit (255)"
wbxr5hubftfa web.5 nginx:latest manager1 Running Running 3 minutes ago
bh3nkp05yd6r \_ web.5 nginx:latest manager1 Shutdown Shutdown 25 minutes ago
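To confirm the new limit actually took effect, docker info on a manager reports it. A quick check, assuming a reasonably recent Docker release whose docker info output includes a Task History Retention Limit line:
PS C:\> docker info | Select-String "Task History"
Task History Retention Limit: 2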
Thank you, Geige V
Upvotes: 4
Reputation: 1469
The containers for those services are removed after a rolling update; what you are seeing is just the task history, a log of the tasks that were shut down.
You can limit how many old entries are kept with:
docker swarm update --task-history-limit 5
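If exited containers are still lingering on your nodes and you want them gone without logging in to each one by hand, a small loop from any machine that can reach the nodes will do. This is a minimal sketch, assuming passwordless SSH to each node under the hostname that docker node ls reports; note that docker container prune removes all stopped containers on a node, not just this service's:
# hedged sketch: iterate over every swarm node by hostname
for node in $(docker node ls --format '{{.Hostname}}'); do
  # prune removes ALL stopped containers on that node, so use with care
  ssh "$node" docker container prune -f
done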
Upvotes: 31