GUNTER

Reputation: 129

Celery tasks not running in docker-compose

I have a docker-compose setup with three services: app, celery, and redis. The application is implemented with Django REST Framework.

I have seen this question several times on Stack Overflow and have tried all the solutions listed. However, the Celery task is not running.

The celery container behaves exactly like the app container: it starts the Django project, but it does not run the worker, so the task never executes.

docker-compose.yml

version: "3.8"
services:
  app:
    build: .
    volumes:
      - .:/django
    ports:
      - 8000:8000
    image: app:django
    container_name: myapp
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - redis
  redis:
    image: redis:alpine
    container_name: redis
    ports:
      - 6379:6379
    volumes:
      - ./redis/data:/data
    restart: always
    environment:
      - REDIS_PASSWORD=
    healthcheck:
      test: redis-cli ping
      interval: 1s
      timeout: 3s
      retries: 30

  celery:
    image: celery:3.1
    container_name: celery
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    command: celery -A myapp worker -l INFO -c 8
    volumes:
      - .:/django
    depends_on:
      - redis
      - app
    links:
      - redis

Dockerfile

FROM python:3.9

RUN useradd --create-home --shell /bin/bash django
USER django

ENV DockerHOME=/home/django

RUN mkdir -p $DockerHOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PIP_DISABLE_PIP_VERSION_CHECK 1

USER root
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable

USER django
WORKDIR /home/django
COPY requirements.txt ./

# set path
ENV PATH=/home/django/.local/bin:$PATH

# Upgrade pip and install requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .

EXPOSE 8000

# entrypoint
ENTRYPOINT ["/bin/bash", "-e", "docker-entrypoint.sh"]

docker-entrypoint.sh

# run migration first
python manage.py migrate

# create test dev user and test superuser
echo 'import create_test_users' | python manage.py shell

# start the server
python manage.py runserver 0.0.0.0:8000

celery.py

from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
app = Celery('myapp', broker='redis://redis:6379')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

settings.py

CELERY_BROKER_URL     = os.getenv('REDIS_URL')  # "redis://redis:6379"
CELERY_RESULT_BACKEND = os.getenv('REDIS_URL')  # "redis://redis:6379"
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Africa/Nairobi'

Upvotes: 2

Views: 6245

Answers (1)

David Maze

Reputation: 159875

Your docker-entrypoint.sh script unconditionally runs the Django server. Since you declare it as the image's ENTRYPOINT, the Compose command: is passed to it as arguments, but your script ignores them.

The best way to fix this is to pass the specific command ("run the Django server", "run a Celery worker") as the Dockerfile CMD or the Compose command:, and have the entrypoint script end with the shell command exec "$@" to run whatever command it was given.

#!/bin/sh
python manage.py migrate
echo 'import create_test_users' | python manage.py shell

# run the container CMD
exec "$@"

In your Dockerfile, set this script as the ENTRYPOINT and declare a default CMD.

ENTRYPOINT ["./docker-entrypoint.sh"]
CMD python manage.py runserver 0.0.0.0:8000
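
One detail to watch: since the ENTRYPOINT now runs the script directly, docker-entrypoint.sh needs its #!/bin/sh line and the execute bit. If the file isn't already executable on the host, you can set it in the Dockerfile:

RUN chmod +x ./docker-entrypoint.sh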

Now in your Compose setup, if you don't specify a command:, the container uses that default CMD; if you do, your command: runs instead. In either case your entrypoint script runs first, and when it reaches the final exec "$@" line it hands control to the provided command.
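
Concretely, with those two files the containers end up running something like the following (a sketch; the worker command is the one from your Compose file):

# app container: ENTRYPOINT plus the image's default CMD
./docker-entrypoint.sh python manage.py runserver 0.0.0.0:8000

# celery container: ENTRYPOINT plus the Compose command:
./docker-entrypoint.sh celery -A myapp worker -l INFO -c 8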

That means you can delete the command: override from your app container. (You do need to leave it for the Celery container.) You can simplify this setup further by removing the image: and container_name: settings (Compose will pick reasonable defaults for both of these) and the volumes: mount that hides the image content, as sketched below.
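
A trimmed-down Compose file along these lines might look like this (a sketch based on your original file; service names are kept, and the Redis healthcheck and environment are unchanged and elided here):

version: "3.8"
services:
  app:
    build: .
    ports:
      - 8000:8000
    depends_on:
      - redis
    # no command:, so the image's default CMD (runserver) is used
  celery:
    build: .
    command: celery -A myapp worker -l INFO -c 8
    depends_on:
      - redis
      - app
  redis:
    image: redis:alpine
    ports:
      - 6379:6379
    volumes:
      - ./redis/data:/data
    restart: always
    # healthcheck: unchanged from your original file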

Upvotes: 3
