Reputation: 14823
I am unable to specify CPU and memory limits for services in a version 3 compose file.
With version 2 it works fine with the mem_limit and cpu_shares parameters under the services. But it fails with version 3: putting them under the deploy section doesn't seem to have any effect unless I am using swarm mode.
Can somebody help?
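For reference, the version 2 style that works for me looks roughly like this (just a sketch; the image and values are illustrative):

version: "2"
services:
  node:
    image: example/node   # illustrative image
    mem_limit: 512M
    cpu_shares: 256

And here is the version 3 file where the limits don't apply: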
version: "3"
services:
node:
build:
context: .
dockerfile: ./docker-build/Dockerfile.node
restart: always
environment:
- VIRTUAL_HOST=localhost
volumes:
- logs:/app/out/
expose:
- 8083
command: ["npm","start"]
cap_drop:
- NET_ADMIN
- SYS_ADMIN
Upvotes: 322
Views: 574275
Reputation: 195
The question is about version 3, but now that the modern docker compose replaces version 3, I will provide an answer for that.
The new Compose Specification (which no longer needs a version field) has a simple mem_limit option:
services:
  app:              # the service name is just an example
    image: example
    mem_limit: 1G
    cpu_count: 1
See: https://github.com/compose-spec/compose-spec/blob/main/spec.md
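A quick way to confirm the limit is applied (assuming the service above is called app):

docker compose up -d
docker stats --no-stream   # the MEM USAGE / LIMIT column should show the 1 GiB cap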
Upvotes: 0
Reputation: 2394
deploy:
  resources:
    limits:
      cpus: '0.001'
      memory: 50M
    reservations:
      cpus: '0.0001'
      memory: 20M
More: https://docs.docker.com/compose/compose-file/compose-file-v3/#resources
or, for the current compose version: https://docs.docker.com/compose/compose-file/deploy/#resources
In your specific case:
version: "3"
services:
node:
image: USER/Your-Pre-Built-Image
environment:
- VIRTUAL_HOST=localhost
volumes:
- logs:/app/out/
command: ["npm","start"]
cap_drop:
- NET_ADMIN
- SYS_ADMIN
deploy:
resources:
limits:
cpus: '0.001'
memory: 50M
reservations:
cpus: '0.0001'
memory: 20M
volumes:
- logs
networks:
default:
driver: overlay
Note: Networks in swarm mode do not bridge. If you would like to connect internally only, you have to attach to the network. You can either reference it as an external network from another compose file, or create the network with the --attachable flag (docker network create -d overlay My-Network --attachable). Otherwise you have to publish the port like this:
ports:
  - 80:80
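If you go the attachable-network route, the compose side would look roughly like this (a sketch, not a complete file; the network name matches the docker network create example above):

networks:
  default:
    external: true
    name: My-Network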
Upvotes: 179
Reputation: 928
This is possible with version >= 3.8. Here is an example using docker-compose >= 1.28.x:
version: '3.9'
services:
  app:
    image: nginx
    cpus: "0.5"
    mem_reservation: "10M"
    mem_limit: "250M"
Proof of it working can be seen in the MEM USAGE column of docker stats.
The expected behavior when reaching the memory limit is that the container gets killed. In this case, either set restart: always or adjust your app code.
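A minimal sketch of the first option (the image name is just an example):

services:
  app:
    image: nginx
    mem_limit: "250M"
    restart: always   # restart the container if it is killed on hitting the limit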
Limit and restart settings in Docker Compose v3 should now be set using:
deploy:
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 3
    window: 120s
  resources:
    limits:
      cpus: '0.50'
      memory: 50M
    reservations:
      cpus: '0.25'
      memory: 20M
Upvotes: 45
Reputation: 9072
Docker Compose v1 does not support the deploy key. It's only respected when you use your version 3 YAML file in a Docker stack.
This message is printed when you add the deploy key to your docker-compose.yml file and then run docker-compose up -d:
WARNING: Some services (database) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
The documentation (https://docs.docker.com/compose/compose-file/#deploy) says:
Specify configuration related to the deployment and running of services. This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
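For completeness, deploying the same file to a swarm would look roughly like this (the file and stack names are just examples):

docker swarm init                                    # only needed if the node is not already part of a swarm
docker stack deploy -c docker-compose.yml mystack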
Nevertheless, you can use Docker Compose v2. Given the following Docker composition, you can use the deploy key to limit your containers' resources.
version: "3.8"
services:
database:
image: mariadb:10.10.2-jammy
container_name: mydb
environment:
MYSQL_ROOT_PASSWORD: root_secret
MYSQL_DATABASE: mydb
MYSQL_USER: myuser
MYSQL_PASSWORD: secret
TZ: "Europe/Zurich"
MARIADB_AUTO_UPGRADE: "true"
tmpfs:
- /var/lib/mysql:rw
ports:
- "127.0.0.1:3306:3306"
deploy:
resources:
limits:
cpus: "4.0"
memory: 200M
networks:
- mynetwork
When you run docker compose up -d (note: in version 2 of Docker Compose you call the docker binary, not the docker-compose Python application) and then inspect the resources, you see that the memory is limited to 200 MB. The CPU limit is not exposed by docker stats.
❯ docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
2c71fb8de607 mydb 0.04% 198MiB / 200MiB 99.02% 2.67MB / 3.77MB 70.6MB / 156MB 18
Upvotes: 85
Reputation: 2373
I think there is confusion here over using docker-compose and docker compose (with a space). You can install the compose plugin using https://docs.docker.com/compose/install if you don't already have it.
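To check which of the two you have, the following should work on a reasonably recent Docker installation:

docker compose version      # the newer plugin, invoked via the docker CLI
docker-compose --version    # the older standalone Python tool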
Here is an example compose file just running Elasticsearch
version: "3.7"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
restart: always
ports:
- "9222:9200"
deploy:
resources:
limits:
cpus: "4"
memory: "2g"
environment:
- "node.name=elasticsearch"
- "bootstrap.memory_lock=true"
- "discovery.type=single-node"
- "xpack.security.enabled=false"
- "ingest.geoip.downloader.enabled=false"
I have it in a directory called estest; the file is called es-compose.yaml. The file sets CPU and memory limits.
If you launch it using docker-compose, e.g.
docker-compose -f es-compose.yaml up
and then look at docker stats, you see:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
e3b6253ee730 estest_elasticsearch_1 342.13% 32.39GiB / 62.49GiB 51.83% 7.7kB / 0B 27.3MB / 381kB 46
So the CPU and memory resource limits are ignored. During the launch you see the warning:
WARNING: Some services (elasticsearch) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
I think this is what leads people to look at Docker stack/swarm. However, if you just switch to using the newer docker compose, now built into the Docker CLI (https://docs.docker.com/engine/reference/commandline/compose/), e.g.
docker compose -f es-compose.yaml up
and look again at docker stats, you see:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d062eda10ffe estest-elasticsearch-1 0.41% 1.383GiB / 2GiB 69.17% 8.6kB / 0B 369MB / 44MB 6
Therefore the limits have been applied.
This is better, in my opinion, than swarm, as it still allows you to build containers as part of the compose project and to pass environment variables easily via a file. I would recommend removing docker-compose and switching over to the newer docker compose wherever possible.
Upvotes: 22
Reputation: 61
I have a different experience; maybe somebody can explain this.
Maybe this is a bug (I think it is a feature), but I am able to use deploy limits (memory limits) in docker-compose without swarm; however, CPU limits don't work, but replication does.
$> docker-compose --version
docker-compose version 1.29.2
$> docker --version
Docker version 20.10.12
version: '3.2'
services:
  limits-test:
    image: alexeiled/stress-ng
    command: [
      '--vm', '1', '--vm-bytes', '20%', '--vm-method', 'all', '--verify', '-t', '10m', '-v'
    ]
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 1024M
Docker stats
b647e0dad247 dc-limits_limits-test_1 0.01% 547.1MiB / 1GiB 53.43% 942B / 0B 0B / 0B 3
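For the replication part mentioned above, the relevant key is deploy.replicas; a rough sketch (the replica count is just an example):

deploy:
  replicas: 2
  resources:
    limits:
      memory: 1024M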
Edited, thx @Jimmix
Upvotes: 4
Reputation: 3480
I know the topic is a bit old and seems stale, but anyway I was able to use these options:
deploy:
  resources:
    limits:
      cpus: '0.001'
      memory: 50M
when using version 3.7 of the compose file format.
What helped in my case was using this command:
docker-compose --compatibility up
The --compatibility flag stands for (taken from the documentation):
If set, Compose will attempt to convert deploy keys in v3 files to their non-Swarm equivalent
I think it's great that I don't have to revert my docker-compose file back to v2.
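If you want to see what the conversion produces, you can print the resolved configuration (assuming the same compose file is in the current directory):

docker-compose --compatibility config   # shows deploy limits translated to mem_limit / cpus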
Upvotes: 308