Erez

Reputation: 1430

Celery doesn't utilize all cores when using Redis or RabbitMQ

I'm writing a build server using Django & Celery that executes many long tasks. It runs on a 4-core server running CentOS.

Initially, I used the default Django broker (BROKER_URL = 'django://'), and it worked perfectly. But I wanted events and monitoring, so I tried switching to either Redis or RabbitMQ.
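For context, "events" here means the standard Celery event machinery, which I enable roughly like this (a minimal sketch, not necessarily my exact settings, in addition to the worker's -E flag):

# settings.py -- emit task events so monitors (celery events, Flower, etc.) can see them
CELERY_SEND_EVENTS = True
CELERY_SEND_TASK_SENT_EVENT = True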

Both Redis and RabbitMQ work, in the sense that all the tasks eventually get executed, but unlike the Django broker, they sometimes utilize only some of the cores, sometimes even just one (!), while the other cores sit idle.

I want to stress once more that the Django broker always utilizes all the cores properly, and the only thing I change to get this effect is the broker.

My configuration is standard (except for having two queues):

# settings.py
from kombu import Queue

CELERY_DEFAULT_QUEUE = 'default'
CELERY_QUEUES = (
    Queue('default',     routing_key='task.#'),
    Queue('new_batches', routing_key='new_batch.#'),
)
CELERY_ACKS_LATE = True            # acknowledge tasks after they finish, not when received
CELERYD_PREFETCH_MULTIPLIER = 1    # prefetch at most one message per pool process
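Tasks are routed to these queues along these lines (a hypothetical sketch; the real task names in my project are different):

CELERY_ROUTES = {
    # hypothetical task names -- the routing keys match the queue bindings above
    'builds.tasks.run_build': {'queue': 'default',     'routing_key': 'task.build'},
    'builds.tasks.new_batch': {'queue': 'new_batches', 'routing_key': 'new_batch.created'},
}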

Django (perfect)

BROKER_URL = 'django://'

Redis (misbehaving)

BROKER_URL = 'redis://localhost:6379/0'

Rabbit (misbehaving)

BROKER_URL = 'amqp://user:pass@localhost:5672/scourge'
CELERY_BACKEND = "amqp"
CELERY_RESULT_BACKEND = "amqp"
CELERY_TASK_RESULT_EXPIRES = 60*60 * 24*7   # results expire after a week 

Any idea why this is happening? Thanks

Upvotes: 2

Views: 454

Answers (1)

Erez

Reputation: 1430

After playing around with different configurations, the -Ofair switch solved my problems.

python manage.py celery worker -E -Ofair

I'm not sure exactly what the underlying mechanism is, or whether there's a more correct solution, but I'm just happy that it works.
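My rough understanding (take it with a grain of salt) is that the default scheduling strategy pre-assigns incoming tasks to pool processes, so several long tasks can queue up behind one busy process while the others idle, whereas -Ofair only hands a task to a process that is actually free. The equivalent on a plain (non-django-celery) worker invocation, assuming a hypothetical app module called proj, would be:

# fair scheduling: don't send a task to a pool process until it is ready for one
celery -A proj worker -E -O fair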

Upvotes: 1
