Reputation: 2087
I have a web application using Django, and I am using Celery for some asynchronous task processing.
For Celery, I am using RabbitMQ as a broker and Redis as a result backend.
RabbitMQ and Redis are running on the same Ubuntu 14.04 server, hosted on a local virtual machine.
The Celery workers are running on remote machines (Windows 10); no workers are running on the Django server.
I have three issues (I think they are related somehow!). Among the errors I keep getting is:
reject requeue=False: [WinError 10061] No connection could be made because the target machine actively refused it
I am also confused about my settings, and I don't know exactly where these issues might come from!
So here are my settings so far:
# region Celery Settings
CELERY_CONCURRENCY = 1
CELERY_ACCEPT_CONTENT = ['json']
# CELERY_RESULT_BACKEND = 'redis://:C@pV@[email protected]:6379/0'
BROKER_URL = 'amqp://soufiaane:C@pV@[email protected]:5672/cvcHost'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACKS_LATE = True
CELERYD_PREFETCH_MULTIPLIER = 1
CELERY_REDIS_HOST = 'cvc.ma'
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0
CELERY_RESULT_BACKEND = 'redis'
CELERY_RESULT_PASSWORD = "C@pV@lue2016"
REDIS_CONNECT_RETRY = True
AMQP_SERVER = "cvc.ma"
AMQP_PORT = 5672
AMQP_USER = "soufiaane"
AMQP_PASSWORD = "C@pV@lue2016"
AMQP_VHOST = "/cvcHost"
CELERYD_HIJACK_ROOT_LOGGER = True
CELERY_HIJACK_ROOT_LOGGER = True
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
# endregion
My celery_settings.py file:
from __future__ import absolute_import
from django.conf import settings
from celery import Celery
import django
import os
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_app.settings')
django.setup()
app = Celery('CapValue', broker='amqp://soufiaane:C@pV@[email protected]/cvcHost', backend='redis://:C@pV@[email protected]:6379/0')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
My __init__.py file:
from __future__ import absolute_import
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery_settings import app as celery_app
My tasks.py file on the Django side:
from __future__ import absolute_import
from my_app.celery_settings import app
# Here I only define the task skeleton, because I'm executing this task on remote workers!
@app.task(name='email_task', bind=True, max_retries=3, default_retry_delay=1)
def email_task(self, job, email):
    try:
        print("x")
    except Exception as exc:
        self.retry(exc=exc)
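For reference, this skeleton is only used to publish the task; a minimal sketch of how I enqueue it from a view (the view name and arguments here are just illustrative):
from django.http import HttpResponse
from my_app.tasks import email_task

def send_email_view(request):
    # .delay() only publishes the message to RabbitMQ; the remote
    # worker that registered 'email_task' is supposed to execute it
    result = email_task.delay('welcome_job', 'user@example.com')
    return HttpResponse('task id: {0}'.format(result.id))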
On the worker side, I have one 'tasks.py' file which has the actual implementation of the task:
from __future__ import absolute_import
from celery.utils.log import get_task_logger
from celery import Celery
logger = get_task_logger(__name__)
app = Celery('CapValue', broker='amqp://soufiaane:C@pV@[email protected]/cvcHost', backend='redis://:C@pV@[email protected]:6379/0')
@app.task(name='email_task', bind=True, max_retries=3, default_retry_delay=1)
def email_task(self, job, email):
    try:
        """
        The actual implementation of the task
        """
    except Exception as exc:
        self.retry(exc=exc)
What could possibly be causing these problems?
What I did notice, though, is that on my Redis server I have already enabled remote connections:
... bind 0.0.0.0 ...
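To double-check that Redis accepts remote connections, this kind of probe can be run from one of the worker machines (a minimal sketch using the redis-py package; host, port, db and password are the ones from my settings above):
import redis

# same host/port/db/password as in the Celery result backend settings
r = redis.StrictRedis(host='cvc.ma', port=6379, db=0, password='C@pV@lue2016')
print(r.ping())  # True only if the TCP connection and AUTH both succeed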
Upvotes: 24
Views: 25322
Reputation: 2141
I had a setup where the 'single instance' servers (the dev and localhost servers) were working, but the setup where Redis ran on a separate server was not. The Celery tasks were executing correctly, but fetching their results was not: I was getting the following error whenever I tried to get the result of a task:
Error 111 connecting to localhost:6379. Connection refused.
What made it work was simply adding this setting to Django:
CELERY_RESULT_BACKEND = 'redis://10.10.10.10:6379/0'
It seems that when this parameter isn't present, Celery defaults to localhost when fetching task results.
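For illustration, this is the kind of call that was failing. AsyncResult talks to the result backend, not the broker, which is why the missing setting only surfaced at result-fetching time (the task id below is made up):
from celery.result import AsyncResult

# Fetching state/results goes through the result backend;
# with no CELERY_RESULT_BACKEND set, this tried localhost:6379.
res = AsyncResult('d9078da5-9915-40a0-bfa1-392c7bde42ed')
print(res.state)            # e.g. 'PENDING', 'SUCCESS', 'FAILURE'
print(res.get(timeout=10))  # raised "Error 111 ..." before the fix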
Upvotes: 3
Reputation: 500
Adding CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True to my settings resolved this problem for me.
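For context, a sketch of where this setting matters, assuming a task that ignores its result (the task below is illustrative, not from the question):
from my_app.celery_settings import app

@app.task(ignore_result=True)
def fire_and_forget(email):
    # The return value is never stored, but with
    # CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True an exception raised
    # here is still written to the result backend for inspection.
    raise RuntimeError('delivery failed for {0}'.format(email))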
Upvotes: 2
Reputation: 410
My guess is that your problem is the password. Your password contains @, which could be interpreted as a divider between the user:pass section and the host section of the URL.
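If that is the case, one common workaround (my assumption, not something from your post) is to percent-encode the reserved characters before building the URL, so each @ in the password becomes %40:
try:
    from urllib.parse import quote  # Python 3
except ImportError:
    from urllib import quote        # Python 2

password = quote('C@pV@lue2016', safe='')  # -> 'C%40pV%40lue2016'
BROKER_URL = 'amqp://soufiaane:{0}@cvc.ma:5672/cvcHost'.format(password)
CELERY_RESULT_BACKEND = 'redis://:{0}@cvc.ma:6379/0'.format(password)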
The tasks stay in PENDING because the workers could not connect to the broker correctly. From Celery's documentation (http://docs.celeryproject.org/en/latest/userguide/tasks.html#pending):
PENDING: Task is waiting for execution or unknown. Any task id that is not known is implied to be in the pending state.
Upvotes: 11