Reputation: 519
I deployed my Django project to AWS ECS using Docker, and to use Celery I set up RabbitMQ on separate EC2 servers (two EC2 instances: broker and result backend).
The problem is that the Celery worker works locally but not on AWS.
When I run docker run --rm -it -p 8080:80 proj
locally, the worker works.
But when I deploy the app on ECS, the worker does not work, and I have to start one manually with celery -A mysite worker -l INFO
in my local Django project, despite configuring supervisor to run the worker.
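One quick way to rule out networking between the container and the broker is a plain TCP check from inside the container. This is a stdlib-only sketch; the host name is a placeholder and 5672 is the default AMQP port:

```python
# Minimal reachability check for the RabbitMQ broker port.
import socket


def broker_reachable(host: str, port: int = 5672, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the broker can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False


# Example (placeholder host): broker_reachable('rabbitmq-ec2.example.com')
```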
Below is my code (Dockerfile, supervisor config, Django settings, celery.py, tasks.py, and nginx configs).
# Dockerfile
FROM ubuntu:16.04
# A layer is created for each command ex)RUN, ENV, COPY, etc...
RUN apt-get -y update
RUN apt-get -y install python3 python3-pip
RUN apt-get -y install nginx
RUN apt-get -y install python-dev libpq-dev
RUN apt-get -y install supervisor
WORKDIR /srv
RUN mkdir app
COPY . /srv/app
WORKDIR /srv/app
RUN pip3 install -r requirements.txt
RUN pip3 install uwsgi
ENV DEBUG="False" \
STATIC="s3" \
REGION="Tokyo"
COPY .conf/uwsgi-app.ini /etc/uwsgi/sites/app.ini
COPY .conf/nginx.conf /etc/nginx/nginx.conf
COPY .conf/nginx-app.conf /etc/nginx/sites-available/app.conf
COPY .conf/supervisor-app.conf /etc/supervisor/conf.d/
COPY .conf/docker-entrypoint.sh /
RUN ln -s /etc/nginx/sites-available/app.conf /etc/nginx/sites-enabled/app.conf
EXPOSE 80
CMD supervisord -n
ENTRYPOINT ["/docker-entrypoint.sh"]
; supervisor-app.conf
[program:uwsgi]
command = uwsgi --ini /etc/uwsgi/sites/app.ini
[program:nginx]
command = nginx
[program:celery]
directory = /srv/app/django_app/
command = celery -A mysite worker -l INFO --concurrency=6
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.err
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
# settings.py
# CELERY STUFF
BROKER_URL = 'amqp://user:[email protected]//'
CELERY_RESULT_BACKEND = 'amqp://user:[email protected]//'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
# celery.py
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
app = Celery('mysite')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
# tasks.py
from celery.task import task
from celery.utils.log import get_task_logger
from .helpers import send_password_email
logger = get_task_logger(__name__)
@task(name="send_password_email_task")
def send_password_email_task(email, password):
    """Send an email when a user requests to recover a password."""
    logger.info("Sent feedback email")
    return send_password_email(email, password)
# nginx.conf
user root;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
daemon off;
events {
    worker_connections 768;
    # multi_accept on;
}
http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    server_names_hash_bucket_size 512;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
# nginx-app.conf
server {
    listen 80;
    server_name localhost ~^(.+)$;
    charset utf-8;
    client_max_body_size 128M;

    location / {
        uwsgi_pass unix:///tmp/app.sock;
        include uwsgi_params;
    }
}
Upvotes: 4
Views: 5627
Reputation: 519
The solution turned out to be simple: I had set the broker node to localhost. There was no problem with the app code.
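In other words, inside the ECS container the settings must point at the broker's EC2 address rather than localhost. A minimal sketch of the corrected settings (the host name is a placeholder):

```python
# settings.py -- point Celery at the broker EC2 instance, not localhost.
# 'rabbitmq-ec2.example.com' stands in for the broker's actual address.
BROKER_URL = 'amqp://user:[email protected]:5672//'
CELERY_RESULT_BACKEND = 'amqp://user:[email protected]:5672//'
```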
Upvotes: 2
Reputation: 47
If you want to run a Celery worker in ECS: by default, containers are executed as the root user, and to run a worker in a root environment you need to set an environment variable. See:
http://docs.celeryproject.org/en/latest/userguide/daemonizing.html
When you define the environment variables in the task definition, set C_FORCE_ROOT = true.
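Alternatively, the variable can be set directly in the supervisor program section so it reaches the worker process regardless of the task definition. A sketch (values are placeholders):

```ini
; supervisor-app.conf -- allow the Celery worker to run as root in the container
[program:celery]
directory = /srv/app/django_app/
command = celery -A mysite worker -l INFO
environment = C_FORCE_ROOT="true"
```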
Upvotes: 0