Brian Hicks

Reputation: 6413

Celery Queue Hanging

I've got one specific job that seems to hang my Celery workers every so often. I'm using RabbitMQ as the broker. I've tried a couple of things to fix this, to no avail.

So I've come up a little short on what's causing this problem, and how I can fix it. Can anyone give me any pointers? The task in question is simply inserting a record into the database (MongoDB in this case.)

Update: I've added CELERYD_FORCE_EXECV. We'll see if that fixes it.

Update 2: nope!

Upvotes: 1

Views: 1649

Answers (2)

Bolke de Bruin

Reputation: 760

You are most likely hitting an infinite-loop bug in Celery / Kombu (see https://github.com/celery/celery/issues/3712) that only got fixed very recently. It has not made it into a release yet. See pull request https://github.com/celery/kombu/pull/760 for details. If you cannot use a repo build for your installation, a workaround is either to switch to Redis or to set CELERY_WORKER_PREFETCH_MULTIPLIER=0 and run the worker with -P solo for now.
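As a sketch of that workaround (exact setting names vary between Celery versions, so treat these as assumptions to check against your version's docs):

```python
# celeryconfig.py -- sketch of the workaround above, not a verified fix.
# Setting names differ across Celery versions; older versions use
# uppercase CELERYD_-prefixed names.
CELERY_WORKER_PREFETCH_MULTIPLIER = 0

# Then start the worker with the solo pool, e.g.:
#   celery -A proj worker -P solo
```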

Upvotes: 1

asksol

Reputation: 19479

A specific job making the child processes hang is often a symptom of IO that never completes, e.g. a web request or socket read without a timeout.

Most libraries support setting a timeout, but if not you can always use socket.setdefaulttimeout:

import socket

import requests
from celery import task

@task
def http_get(url, timeout=1.0, retry_after=3.0, max_retries=None):
    # Set a process-wide default timeout so the underlying socket
    # read cannot block forever, then restore the previous value.
    prev_timeout = socket.getdefaulttimeout()
    socket.setdefaulttimeout(timeout)
    try:
        return requests.get(url)
    except socket.timeout as exc:
        raise http_get.retry(exc=exc, countdown=retry_after, max_retries=max_retries)
    finally:
        socket.setdefaulttimeout(prev_timeout)
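The save-and-restore dance around setdefaulttimeout can also be factored into a small context manager, so other tasks can reuse it (a sketch; the name default_socket_timeout is mine, not part of the stdlib or Celery):

```python
import socket
from contextlib import contextmanager

# Sketch: temporarily apply a process-wide default socket timeout,
# restoring the previous value even if the body raises.
@contextmanager
def default_socket_timeout(timeout):
    prev = socket.getdefaulttimeout()
    socket.setdefaulttimeout(timeout)
    try:
        yield
    finally:
        socket.setdefaulttimeout(prev)
```

Note that the default timeout is process-wide, so with the prefork pool it affects every socket opened in that worker child while the block is active.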

Upvotes: 1
