Project structure:

    proj/
        my_app/
            __init__.py
            tasks.py
        settings/
            __init__.py
            base_settings.py
            settings.py
            prod_settings.py
            celery_config.py
            celery.py
my_app/tasks.py:

    from settings import celery_app

    @celery_app.task(bind=True, max_retries=5, default_retry_delay=3, name='my_app.tasks.send_mail')
    def send_email_task(self, message, subject, recipient_list):
        send_mail_func()
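
For context, the task is enqueued with Celery's standard delay() call; the caller below is a hypothetical example, the real call site is elsewhere in the project:

    # Hypothetical caller (e.g. in a Django view); the arguments match
    # send_email_task(self, message, subject, recipient_list).
    from my_app.tasks import send_email_task

    send_email_task.delay('Hello', 'Subject', ['user@example.com'])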
settings/celery_config.py:

    from os import getenv as env

    broker_url = env('CELERY_BROKER_URL', 'not_value')
    result_backend = env('CELERY_RESULT_BACKEND', 'not_value')
    timezone = env('CELERY_TIMEZONE', 'UTC')
    task_serializer = 'json'
    accept_content = ['json']
    result_serializer = 'json'
    enable_utc = False
    broker_connection_retry_on_startup = True
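
To rule out missing environment variables, a quick check like this (a minimal sketch, run inside the container) prints exactly what celery_config.py would see; a literal 'not_value' fallback means the variable is absent:

    import os

    # Print the raw environment as the config module sees it.
    for name in ('CELERY_BROKER_URL', 'CELERY_RESULT_BACKEND', 'CELERY_TIMEZONE'):
        print(name, '=', os.getenv(name, 'not_value'))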
settings/celery.py:

    from celery import Celery

    celery_app = Celery('app_celery')
    celery_app.config_from_object('settings.celery_config')
    celery_app.autodiscover_tasks()
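
A minimal way to check whether config_from_object() actually took effect, assuming the interpreter is started from the project root:

    from settings.celery import celery_app

    # With a correctly loaded config these should echo the env values,
    # not None.
    print(celery_app.conf.broker_url)
    print(celery_app.conf.result_backend)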
docker-compose.yml:

    rabbitmq:
      restart: always
      env_file:
        - .env
      volumes:
        - ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
        - ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
      healthcheck:
        test: [ "CMD-SHELL", "rabbitmqctl status" ]
        interval: 30s
        timeout: 10s
        retries: 5
      command: >
        bash -c "
        rabbitmq-plugins enable rabbitmq_management &&
        rabbitmq-server &
        sleep 10 &&
        rabbitmqctl await_startup &&
        rabbitmqctl add_vhost my_vhost &&
        rabbitmqctl set_user_tags ${RABBITMQ_DEFAULT_USER} administrator &&
        rabbitmqctl set_permissions -p my_vhost ${RABBITMQ_DEFAULT_USER} '.*' '.*' '.*' &&
        wait
        "
    backend:
      restart: always
      volumes:
        - static:/static/
        - media:/media/
        - /home/user/logs/gunicorn:/var/log/gunicorn
        - /home/user/logs/django:/var/log/django
      env_file:
        - .env
      command: bash -c "
        export DJANGO_SETTINGS_MODULE=settings.prod_settings
        && python manage.py migrate --settings=settings.prod_settings
        && python manage.py collectstatic --noinput --settings=settings.prod_settings
        && python manage.py create_admin ${ADMINER_ADMIN_EMAIL} ${ADMINER_ADMIN_PASSWORD} --settings=settings.prod_settings
        && gunicorn settings.wsgi -b 0.0.0.0:8080 --workers 2 --access-logfile /var/log/gunicorn/access.log --error-logfile /var/log/gunicorn/error.log
        "
      depends_on:
        pg:
          condition: service_healthy
    celery_worker:
      restart: always
      env_file:
        - .env
      volumes:
        - /home/user/logs/django:/var/log/django
      command: bash -c "
        sleep 10 &&
        export DJANGO_SETTINGS_MODULE=settings.prod_settings
        && celery -A settings worker --loglevel=DEBUG --concurrency=2 --pool=gevent --hostname=worker1
        "
      depends_on:
        rabbitmq:
          condition: service_healthy
    celery_beat:
      restart: always
      env_file:
        - .env
      volumes:
        - /home/user/logs/django:/var/log/django
      command: bash -c "
        export DJANGO_SETTINGS_MODULE=settings.prod_settings
        && celery -A settings beat --loglevel=INFO
        "
      depends_on:
        rabbitmq:
          condition: service_healthy
The problem: the Celery worker runs fine in my local environment, but in production the tasks never even reach RabbitMQ and stay in the PENDING state forever.

When I open a shell in the backend container and inspect celery_app, its broker_url and backend are both None. It looks as if the settings are never applied to the Celery instance, which would explain why tasks never reach RabbitMQ.

I have tried uppercase setting names, adding the CELERY_ prefix, and passing the settings via .update(), but nothing helps. I sincerely don't understand what the problem might be. Thanks!
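
For completeness, the .update() variant looked roughly like this (reconstructed from memory, not the exact code):

    import os

    # Attempted to force the broker settings onto the app directly.
    celery_app.conf.update(
        broker_url=os.getenv('CELERY_BROKER_URL'),
        result_backend=os.getenv('CELERY_RESULT_BACKEND'),
    )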