mehdi

Reputation: 1150

Daemonizing Celery in production

I'm trying to run Celery as a service on Ubuntu 18.04, using Django 2.1.1, with Celery 4.1.1 installed in a virtual environment (env) and Celery 4.1.0 installed system-wide. I'm following this tutorial to run Celery as a service. Here is my Django project tree:

    hamclassy-backend
                 ├── apps
                 ├── env
                 │   └── bin
                 │        └── celery
                 ├── hamclassy
                 │   ├── celery.py
                 │   ├── __init__.py
                 │   ├── settings-dev.py
                 │   ├── settings.py
                 │   └── urls.py
                 ├── wsgi.py
                 ├── manage.py
                 └── media

Here is celery.py:

    from __future__ import absolute_import, unicode_literals
    import os
    from celery import Celery

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'hamclassy.settings')

    app = Celery('hamclassy')

    app.config_from_object('django.conf:settings', namespace='CELERY')
    app.autodiscover_tasks()

Here is /etc/systemd/system/celery.service:

    [Unit]
    Description=Celery Service
    After=network.target

    [Service]
    Type=forking
    User=classgram
    Group=www-data
    EnvironmentFile=/etc/conf.d/celery
    WorkingDirectory=/home/classgram/www/hamclassy-backend
    ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
      -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
      --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
    ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
      --pidfile=${CELERYD_PID_FILE}'
    ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
      -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
      --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'

    [Install]
    WantedBy=multi-user.target


and /etc/conf.d/celery:

    CELERYD_NODES="celery-worker"

    CELERY_BIN="/home/classgram/www/env/bin/celery"

    # App instance to use
    CELERY_APP="hamclassy"

    CELERYD_MULTI="multi"

    # Extra command-line arguments to the worker
    CELERYD_OPTS="--time-limit=300 --concurrency=8"

    # %n will be replaced with the first part of the nodename.
    CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
    CELERYD_PID_FILE="/var/run/celery/%n.pid"
    CELERYD_LOG_LEVEL="INFO"

When I start the service with systemctl start celery.service, the following error appears:

Job for celery.service failed because the control process exited with error code. See "systemctl status celery.service" and "journalctl -xe" for details.

When I run sudo journalctl -b -u celery, the following log appears:

    Starting Celery Service...
    celery.service: Control process exited, code=exited status=200
    celery.service: Failed with result 'exit-code'.
    Failed to start Celery Service.

Some more information:
1) The output of groups classgram is: www-data sudo.
2) After activating the environment, I can run Celery just fine with celery -A hamclassy worker -l info.

Thank you

Upvotes: 6

Views: 8715

Answers (1)

DejanLekic

Reputation: 19787

OK, now that we know what the error was (from your comment), the following few steps will most likely solve your problem:

  1. Create a virtual environment for your Celery: python3 -m venv /home/classgram/venv

  2. Your environment file has to set PYTHONPATH=/home/classgram/www/hamclassy-backend, which is the simplest solution, good for testing and development. For production, however, I recommend building a Python package (wheel) and installing it in your virtual environment.

  3. Modify your systemd service file (and possibly the environment file too) so that you execute Celery directly from the virtual environment we created above. It should run Celery this way (or similar): /home/classgram/venv/bin/celery multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}. Alternatively, you can keep the service file as it is and simply set CELERY_BIN to /home/classgram/venv/bin/celery.
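Putting steps 1-3 together, the environment file could look like the sketch below. It assumes the venv lives at /home/classgram/venv and the project at /home/classgram/www/hamclassy-backend; adjust both paths to your layout. Note that systemd's EnvironmentFile expects plain VAR=value lines (no export keyword):

    # /etc/conf.d/celery -- sketch, paths are assumptions
    CELERYD_NODES="celery-worker"

    # celery binary inside the virtual environment from step 1
    CELERY_BIN="/home/classgram/venv/bin/celery"

    CELERY_APP="hamclassy"
    CELERYD_OPTS="--time-limit=300 --concurrency=8"

    CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
    CELERYD_PID_FILE="/var/run/celery/%n.pid"
    CELERYD_LOG_LEVEL="INFO"

    # Step 2: make the Django project importable by the worker
    PYTHONPATH="/home/classgram/www/hamclassy-backend"

Also make sure the directories for the pid and log files (/var/run/celery and /var/log/celery) exist and are writable by the classgram user, otherwise the worker will fail to start.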

Upvotes: 7
