Marvin

Reputation: 1706

Unexpected extra container created when deploying a service to a swarm

I am seeing some odd behavior when I create a service with Docker in swarm mode.

Basically, I create a service from a private registry, with a bind mount:

docker service create --mount type=bind,src=/some/shared/filesystem/mod_tile,dst=/osm/mod_tile,ro --name="mod_tile" --publish 8082:80 --replicas 3 --with-registry-auth  my-registry:5050/repo1/mod_tile

This goes well... and my service is replicated the way I expected...

But when I run docker ps on the manager, I see my expected container, as well as an unexpected second container running from the same image, with a different name:

CONTAINER ID        IMAGE                                        COMMAND                  CREATED              STATUS                PORTS                                            NAMES
ca33d               my-registry:5050/mod_tile:latest             "apachectl -D FOREGR…"   About a minute ago   Up About a minute                                                      vigilant_kare.1.fn5u
619e7               my-registry:5050/mod_tile:latest             "apachectl -D FOREGR…"   3 minutes ago        Up 3 minutes                                                           mod_tile.3.dyismrc
4f1ebf              demo/demo-tomcat:0.0.1                       "./entrypoint.sh"        7 days ago           Up 7 days (healthy)   9900/tcp, 0.0.0.0:8083->8080/tcp                 tomcatgeoserver
d3adf               some.repo:5000/manomarks/visualizer:latest   "npm start"              8 days ago           Up 8 days             8080/tcp                                         supervision_visualizer.1.ok27kbz
673c1               some.repo:5000/grafana/grafana:latest        "/run.sh"                8 days ago           Up 8 days             3000/tcp                                         supervision_grafana.1.pgqko8
                    some.repo:5000/portainer:latest              "/portainer --extern…"   8 days ago           Up 8 days             9000/tcp                                         supervision_portainer.1.vi90w6
bd9b1               some.repo:5000/prom/prometheus:latest        "/bin/prometheus -co…"   8 days ago           Up 8 days             9090/tcp                                         supervision_prometheus.1.j4gyn02
d8a8b               some.repo:5000/cadvisor:0.25.0               "/usr/bin/cadvisor -…"   8 days ago           Up 8 days             8080/tcp                                         supervision_cadvisor.om7km
bd46d               some.repo:5000/prom/node-exporter:latest     "/bin/node_exporter …"   8 days ago           Up 8 days             9100/tcp                                         supervision_nodeexporter.om7kmd
04b53               some.repo:5000/sonatype/nexus3               "sh -c ${SONATYPE_DI…"   9 days ago           Up 2 hours            0.0.0.0:5050->5050/tcp, 0.0.0.0:8081->8081/tcp   nexus_registry

At first, I thought it was a leftover container from previous attempts, so I stopped it... but a few seconds later, it was up again! No matter how many times I stop it, it gets restarted.

So, I guess it is there on purpose... but I don't understand: I already have my 3 replicas running (I checked on all nodes), and even when I promote another node to manager, the extra container appears only on the leader...

This may come from one of my other containers (used for supervision), but so far I haven't been able to figure out which one...
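
For reference, swarm mode labels every container it manages with the name of its owning service, so inspecting the mystery container (ca33d above) should show where it comes from:

# Print the name of the service that manages container ca33d
docker inspect --format '{{ index .Config.Labels "com.docker.swarm.service.name" }}' ca33d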

Does anyone have an idea why this extra container is created?

EDIT 05/07

Here is the result of docker service ps for the mod_tile service. The 3 replicas are there, one on each node. The extra container is not listed by this command.

ID                  NAME                IMAGE                                    NODE                DESIRED STATE       CURRENT STATE          ERROR               PORTS
c77gc        mod_tile.1          my-registry:5050/mod_tile:latest   VM3           Running             Running 15 hours ago
u7465        mod_tile.2          my-registry:5050/mod_tile:latest   VM4           Running             Running 15 hours ago
dyism        mod_tile.3          my-registry:5050/mod_tile:latest   VM2           Running             Running 15 hours ago
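
For completeness, listing all services on the swarm (rather than just the tasks of mod_tile) would show whether a second service definition exists:

# List every service on the swarm; an unexpected, auto-named service would appear here
docker service ls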

Upvotes: 0

Views: 129

Answers (1)

BMitch

Reputation: 263499

It looks like you have a second service defined with the name "vigilant_kare", a name that was likely auto-generated because the service was created without a --name.

Swarm mode automatically replaces a stopped or deleted container to return you to the target state. To delete a container managed by swarm mode, you need to remove the service that manages it:

docker service rm vigilant_kare
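
Once the service is removed, swarm mode stops reconciling its tasks, so nothing will recreate the container. A quick check, using the names from the question above:

# The service list should no longer contain vigilant_kare
docker service ls

# ...and no container should be restarted under that name
docker ps --filter name=vigilant_kare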

Upvotes: 1
