Tits

Reputation: 21

Why doesn't Gunicorn use all resources, yet delivers pages slowly?

I have a setup with an Nginx container in front of a Gunicorn application container (plus other containers).

When traffic is heavy (100 r/s), pages are delivered very slowly, even though the containers are far from fully used (roughly 40% CPU on the application container, only 2 GB of its 8 GB of RAM in use, and the other containers at more or less 0% CPU).

I've already set Gunicorn workers to 16 and checked the Linux limits (file descriptors, sockets), and everything seems OK, so why doesn't it scale up?

nginx.conf (key values):

worker_processes  auto;

events {
    use epoll;
    worker_connections  4096;
}
http {
    sendfile        on;
    keepalive_timeout  10;
}

Gunicorn start:

gunicorn --bind 0.0.0.0:80 --workers 16 --max-requests 1000 ****.wsgi:application

Upvotes: 0

Views: 1643

Answers (1)

Virtuozzo

Reputation: 1993

so why doesn't it scale up?

It's better if worker_processes is set to auto. This means Nginx forks workers by itself depending on the number of physical cores available to the process. That way, Jelastic automatically adjusts the workers during vertical resource scaling, when the number of cores can change. If you have 4 cores, for example (in recent Jelastic versions the number of cores depends on the number of cloudlets), there is no point in setting a lot of workers, since they will sit idle and won't be taken into account by utilization control.

Actually, the CPU may not be the issue here at all. Even with the workers you had, the CPU could be underloaded and the bottleneck could be in the network, RAM, disk, etc. In that case the CPU would never be loaded enough to trigger a scaling process.

I've already set Gunicorn workers to 16

Usually, the number of workers equals the number of cores. The only reason for such a high value would be that you actually have that many cores, but it's unlikely you have 16 cores inside your container.
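As an illustration of that rule of thumb, here is a minimal gunicorn.conf.py sketch (the file name and the cpu_count() call are my assumptions, not something from your setup) that derives the worker count from the cores actually visible to the process:

# gunicorn.conf.py - minimal sketch, assuming the "workers = cores" rule of thumb above
import multiprocessing

bind = "0.0.0.0:80"
workers = multiprocessing.cpu_count()   # one worker per visible core instead of a hard-coded 16
max_requests = 1000                     # keep the worker recycling from the original command

You could then start it with gunicorn -c gunicorn.conf.py ****.wsgi:application instead of hard-coding --workers 16. Note that cpu_count() may report the host's cores rather than the container's share, so treat it only as a starting point.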

In short, if you provide us with the hosting provider and environment names, we will be able to take a closer look.

Upvotes: 1
