rjbathgate

Reputation: 319

Updating a Docker container from an image leaves old images on the server

My process for pushing an updated Docker image to production (a Docker swarm) is as follows:

On dev environment:

docker-compose build

docker push myrepo/name

Then on the prod server, which is a docker swarm:

docker pull myrepo/name

docker service update --image myrepo/name --with-registry-auth containername
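
For reference, here is the same flow sketched as one script (a minimal sketch; the prod-manager host name is a placeholder, and it assumes SSH access to a swarm manager node):

    #!/bin/sh
    # On the dev environment: build and push the image
    docker-compose build
    docker push myrepo/name

    # On a swarm manager ("prod-manager" is a placeholder host name):
    # pull the new image and roll the service over to it
    ssh prod-manager <<'EOF'
    docker pull myrepo/name
    docker service update --image myrepo/name --with-registry-auth containername
    EOF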

This works perfectly; the swarm is updated with the latest image.

However, it always leaves the old images on the live servers, and I'm left with something like this:

docker image ls

REPOSITORY                   TAG         IMAGE ID       CREATED          SIZE
myrepo/name                  latest      abcdef         14 minutes ago   1.15GB
myrepo/name                  <none>      bcdefg         4 days ago       1.22GB
myrepo/name                  <none>      cdefgh         6 days ago       1.22GB

Over time, this results in a heap of disk space being used unnecessarily.

I've read that docker system prune is not safe to run in production, especially in a swarm.

So I am having to regularly and manually remove old images, e.g.

docker image rm bcdefg cdefgh
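
One narrower option I've seen (a sketch, not something I've verified in the swarm) is docker image prune, which without -a only removes dangling images, i.e. the <none> entries above, so it is much smaller in scope than docker system prune:

    # Remove only dangling (<none>) images; tagged images, containers,
    # volumes and networks are left untouched
    docker image prune -f

    # Optionally restrict it to dangling images older than 24 hours
    docker image prune -f --filter "until=24h"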

Am I missing a step in my update process, or is it 'normal' that old images are left over to be manually removed?

Thanks in advance

Upvotes: 1

Views: 845

Answers (1)

matic1123

Reputation: 1119

Since you are using Docker Swarm, and probably a multi-node setup, you could deploy a global service which does the cleanup for you. We are using Bret Fisher's approach for it:

    version: '3.9'

    services:
      image-prune:
        # the docker CLI image, pulled through our internal registry proxy
        image: internal-image-registry.org/proxy-cache/library/docker:20.10
        # every 4 hours (14400 s), remove all images unused for at least 4 hours
        command: sh -c "while true; do docker image prune -af --filter \"until=4h\"; sleep 14400; done"
        networks:
          - bridge
        volumes:
          # talk to the host's Docker daemon from inside the container
          - /var/run/docker.sock:/var/run/docker.sock
        deploy:
          # run one instance of the service on every node in the swarm
          mode: global
        labels:
          - "env=devops"
          - "application=cleanup-image-prune"

    networks:
      bridge:
        external: true
        name: bridge

When a new host is added to the swarm, the service is automatically deployed onto it with our own base Docker image and then does the cleanup job for us.
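
Rolling it out is an ordinary stack deploy from a manager node; for example (the file name image-prune.yml and the stack name are arbitrary choices here):

    # Deploy (or update) the cleanup stack; file and stack names are examples
    docker stack deploy --compose-file image-prune.yml image-prune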

We have not yet had time to look into the newer Docker service types that are scheduled on their own. It would probably be wise to move the cleanup job from an infinite loop in a script to the replicated or global job modes provided by Docker, but the current setup just works for us, so we did not make swapping over a high priority. More info on the replicated jobs. A rough sketch follows.
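
Here is roughly what that could look like with the job modes added in Docker 20.10 (untested on our side; the service name is an arbitrary example, and something like cron would still need to re-create the job periodically):

    # Run one cleanup task once on every node (global-job mode, Docker 20.10+)
    docker service create \
      --name image-prune-job \
      --mode global-job \
      --restart-condition none \
      --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
      docker:20.10 \
      docker image prune -af --filter "until=4h"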

Upvotes: 1
