Reputation: 131
Is there any way to limit Swarm to a maximum number of containers per worker, for example 20, so that no single worker runs more than 20 containers? This would give better QoS (Quality of Service) and would also prevent overcommitting the host's resources.
Thanks
Upvotes: 5
Views: 9092
Reputation: 8441
EDIT
This is now implemented and will be released as part of Docker 19.03.
You can see how it works with stack on docker/cli#1410 and without stack (docker service ...) on docker/cli#1612
Actually, no.
There is a GitHub issue that discusses it.
Upvotes: 2
Reputation: 161614
For docker-ce 19.03+, you can simply create/update your service with the --replicas-max-per-node option:
# create service with replicas-max-per-node=10
docker service create --replicas=100 --replicas-max-per-node=10 --name your_service_name your_image_name
# update service with replicas-max-per-node=20
docker service update --replicas-max-per-node=20 your_service_name
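As a usage note (not from the original answer): if --replicas-max-per-node multiplied by the number of eligible nodes is smaller than --replicas, the surplus tasks should stay pending rather than overcommit a node. You can check which node each task landed on with:
# list the service's tasks and the node each one runs on
docker service ps your_service_name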
The same limit is available for stack deployments. From the Docker 19.03 release notes:
- Add 3.8 compose version
- Limit service scale to the size specified by the field deploy.placement.max_replicas_per_node
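A minimal stack-file sketch of that field, mirroring the CLI example above (service and image names are placeholders):
version: "3.8"
services:
  your_service_name:
    image: your_image_name
    deploy:
      replicas: 100
      placement:
        max_replicas_per_node: 10  # at most 10 tasks of this service per node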
Upvotes: 8
Reputation: 61
You can't say exactly "limit to 20 containers per node". If memory is your concern, you can look into using limits and reservations in your stack files. A reservation tells the scheduler not to place a container on a node that doesn't have enough memory available. There are also CPU reservations/limits you can set, but those don't seem to influence the scheduler.
This page on compose files is helpful for learning about limits and reservations: https://docs.docker.com/compose/compose-file/#resources
my-java-service:
  image: yourcompany.com/my-java-service:1.1.0
  environment:
    - JAVA_OPTS=-Xmx4096m -Xms4096m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:-TieredCompilation -XX:+ParallelRefProcEnabled
  deploy:
    mode: replicated
    placement:
      constraints:
        - node.labels.env.lifecycle==prod  # only schedule on nodes labeled for prod
    replicas: 40
    resources:
      reservations:
        memory: 5120M  # scheduler only places a task on nodes with 5 GB of memory free
    update_config:
      delay: 1m
      parallelism: 1  # roll out updates one task at a time
    restart_policy:
      condition: none
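For a hard runtime cap in addition to the scheduling reservation, the resources block can also carry limits. A minimal sketch (the values here are illustrative, not from the original answer):
    resources:
      limits:
        cpus: "2.0"    # throttle the container to 2 CPUs at runtime
        memory: 6144M  # container is OOM-killed if it exceeds this
      reservations:
        memory: 5120M  # node must have this much memory free for the task to be scheduled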
Upvotes: 4