Kenny Tai Huynh

Reputation: 1599

Set CPU limit in Docker Compose

I have an on-prem server on which I'd like to deploy many microservices. I'm using a docker-compose file to declare all the services and would like to set CPU limits. I referred to the docs here: https://docs.docker.com/compose/compose-file/ My docker-compose file looks like this:

version: "3.7"
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 256M
        reservations:
          cpus: '0.25'
          memory: 64M
  service1:
    image: service1 image
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 64M

...

I'm confused about how to calculate the cpus limits. For example, say the host CPU has 8 cores and there are 20 microservices. Is there a way to calculate the CPU limit for each service, or a formula to do so?

----- UPDATE ------ To make it clearer, my main point here is the CPU limit. I'd like to send an alert when one microservice is using 80% of the CPU allotted to that microservice. If I don't set a CPU limit, is it true that the microservice's CPU usage will be the same as the host's CPU usage? I don't use Docker Swarm, only plain Docker.

Any ideas are really appreciated.

Thanks,

Upvotes: 10

Views: 38748

Answers (2)

blissweb

Reputation: 3855

Welcome to 2022. I'm using docker-compose version 1.29 with compose file format 3.9.

If I set my yml file as follows:

version: '3.9'

services:

    astro-cron:
        build: ./cron
        image: astro-cron
        restart: unless-stopped
        volumes:
          - /home/docker-sync/sites/com/cron:/astro/cron
        environment:
          - TZ=America/Phoenix
        mem_limit: 300m
        mem_reservation: 100m
        cpus: 0.3

It nicely limits the container to 0.3 CPUs (that is, 30% of a single core). If the memory limit is exceeded, the container is killed, and restart: unless-stopped then brings it back up automatically.

The CPU limit does not kill anything; it just throttles the container.
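If you want to verify that the limit is actually being applied, docker stats shows live per-container usage (astro-cron is the container from the example above):

docker stats astro-cron --no-stream

With cpus: 0.3 in place, the CPU % column should stay at roughly 30% or below even when the container is busy.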

I'm not using docker swarm.

PS: To address the other answer's point that an overcommitted CPU isn't a problem: it IS a problem if one of your containers is an unimportant background job and another is a customer-facing web site or database. Unless someone can explain why that wouldn't be the case.
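One way to express that kind of priority without hard limits is relative weighting (a sketch only; cpu_shares sets a relative weight that matters just when the CPU is contended, and the service names here are illustrative):

services:

    web-frontend:
        image: mysite
        cpu_shares: 1024    # default weight; wins under contention
    batch-job:
        image: batch
        cpu_shares: 256     # gets about 1/4 the CPU of web-frontend when both are busy

When the CPU is idle, both containers can still use as much as they want.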

Upvotes: 18

David Maze

Reputation: 158908

Having overcommitted CPU isn't really a problem. If you have 16 processes and they're all trying to do work requiring 100% CPU on an 8-core system, the kernel will time-share across the processes; in practice you should expect those processes to get 50% CPU each and the task to just take twice as long. One piece of advice I've heard (in a Kubernetes space) is to set CPU requests based on the steady-state load requirement of the service, and to not set CPU limits.
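Translated into Compose terms, that advice would look something like this (a sketch only; the numbers are illustrative, and reservations are only enforced under Swarm's deploy: handling):

version: "3.7"
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        reservations:
          cpus: '0.25'    # steady-state CPU the service needs
          memory: 64M
        # no CPU limit: under contention the kernel time-shares fairly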

There's no magic formula to set either of these numbers. The best way is to set up some metrics system like Prometheus, run your combined system, and look at the actual resource utilization. In some cases it's possible to know that a process is single-threaded and will never use more than 1 core, or that you expect a process to be I/O bound and if it's not it should get throttled, but basing these settings on actual usage is probably better.
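For example, one common way to get per-container CPU metrics into Prometheus is to run cAdvisor alongside your services (a sketch; the image tag is illustrative, and the mounts are the ones cAdvisor's documentation usually lists):

version: '3.7'
services:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro

Prometheus can then scrape cAdvisor's /metrics endpoint, and a metric like container_cpu_usage_seconds_total gives you the per-container usage you'd alert on (e.g. the 80% threshold from the question).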

(Memory is different, in that you can actually run out of physical memory, but also that processes are capable of holding on to much more memory than they actually need. Again a metrics tool will be helpful here.)

Your question suggests a single host. The deploy: section only works with Docker's Swarm cluster manager. If you're not using Swarm, you need to use a version 2.x docker-compose.yml file, which has a different set of resource-constraint declarations (and mostly doesn't have the concept of "reservations"). For example,

version: '2.2'
services:
  redis:
    image: redis:alpine
    cpus: 2.0             # CPU limit; the cpus key needs file format 2.2+
    mem_limit: 256m
    mem_reservation: 64m
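Once a container is running, you can confirm the limit took effect (a quick check; substitute your real container name, which Compose generates as something like <project>_redis_1):

docker inspect <container-name> --format '{{.HostConfig.NanoCpus}}'

This prints the CPU limit in billionths of a CPU, so cpus: 2.0 shows up as 2000000000.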

Upvotes: 4
