Reputation: 624
I have a Celery configuration with Django, and I'm looking for a way to run a task that is expected to execute during the whole running cycle of the application. Is this a case for a Celery worker, or are there other options to start such a long-running task in parallel with the Django server?
I also want to be able to access the database from the task, and to monitor it with supervisord so that it is restarted if it fails.
Upvotes: 1
Views: 4071
Reputation: 2035
In your supervisord.ini:
[group:yourproject-staging]
programs=yourproject-staging-uwsgi,yourproject-staging-celery
[program:yourproject-staging-uwsgi]
command=/data/www/yourproject-staging/bin/uwsgi --ini /data/www/yourproject-staging/conf/yourproject.staging.uwsgi.ini
user=www-data
autostart=true
autorestart=true
startsecs=5
priority=1100
killasgroup=true
[program:yourproject-staging-celery]
command=/data/www/yourproject-staging/bin/celery -A yourproject worker --loglevel=INFO
directory=/data/www/yourproject-staging
environment=E=staging
user=www-data
autostart=true
autorestart=true
startretries=2
exitcodes=0
stopasgroup=true
killasgroup=true
startsecs=5
priority=850
stdout_logfile=/data/log/yourproject-staging/celery_worker.log
stderr_logfile=/data/log/yourproject-staging/celery_error.log
This will start your celery worker in the staging environment.
Check that you have Redis installed and running, though, because I didn't and I got stuck for a couple of hours.
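For reference, the -A yourproject part of the command assumes a Celery app is defined in the project, along the lines of this minimal sketch (module and setting names are illustrative, following the Celery/Django docs):

# yourproject/celery.py -- minimal app so "celery -A yourproject worker" can find it
import os

from celery import Celery

# make Django settings available to the worker process
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourproject.settings")

app = Celery("yourproject")
# read CELERY_* settings from Django settings,
# e.g. CELERY_BROKER_URL = "redis://localhost:6379/0"
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()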
Upvotes: 0
Reputation: 11932
I see two questions:
1. How to start a Celery task when Django has finished starting up.
See point 2 below, but before that, read points 0 and 1.
2. How to run a task which is expected to execute during the whole running cycle of the application.
As there is not a concrete use case, I see three interpretations:
0. How to run Celery together with runserver (or uwsgi, gunicorn, mod_wsgi, etc.)
You need to run different commands for the different processes: one for the web server process and another for the Celery workers. They are linked to each other through the broker: when the broker is the same, Django sends a task to it and Celery pulls it. Use a process manager like supervisord to manage the processes, and the same broker to make them talk to each other.
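As a rough sketch of that wiring, assuming Redis as the broker (all names here are illustrative, not taken from the question):

# yourproject/settings.py -- both processes read the same broker URL
CELERY_BROKER_URL = "redis://localhost:6379/0"

# yourproject/tasks.py
from celery import shared_task

@shared_task
def long_running_task():
    # executed in the worker process, not in the web server process
    ...

# somewhere in a Django view: hand the task to the broker; a worker pulls it
# long_running_task.delay()

Supervisord then only has to keep both processes alive; the broker does the talking.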
1. A task that needs to start on each request/response.
After you have the Celery worker running under supervisord, use the request/response signals to call the function carrying the respective Celery @task decorator.
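A minimal sketch of that idea, using Django's request_finished signal (the task itself is a made-up example):

# yourproject/tasks.py
from celery import shared_task

@shared_task
def on_request_done():
    # whatever should happen after each request/response cycle
    ...

# yourproject/apps.py, or any module imported at startup
from django.core.signals import request_finished
from django.dispatch import receiver

from yourproject.tasks import on_request_done

@receiver(request_finished)
def kick_off_task(sender, **kwargs):
    # enqueue the task; the worker executes it outside the request cycle
    on_request_done.delay()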
2. A task that runs from the moment python manage.py runserver starts until python manage.py runserver is terminated. (This is different from point #1, since one python manage.py runserver process is responsible for serving N request/responses.) This sounds to me like long polling. Suppose you want to measure how long the app has been running, counting the seconds from when it starts until it ends: normally such cases are handled with other strategies, like analyzing the app logs, but if this really is the case, you will have a busy Celery worker running all the time. This is usually a questionable pattern, but the use case is the use case. The entry point to the web app is the wsgi file, and the exit point is the SIGINT signal sent to the process; read this question.
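If that really is the requirement, one way to sketch it is to kick the task off when the worker boots, using Celery's worker_ready signal (the task body and names are assumptions, not the asker's code):

# yourproject/tasks.py
import time

from celery import shared_task
from celery.signals import worker_ready

@shared_task
def lifecycle_task():
    # keeps one worker busy for the whole application cycle; it dies with
    # the worker on SIGINT, and supervisord's autorestart brings it back
    while True:
        # e.g. query the database and act on what you find
        time.sleep(60)

@worker_ready.connect
def start_lifecycle_task(sender, **kwargs):
    # enqueue one instance of the task as soon as the worker is ready
    lifecycle_task.delay()

Since the task can reach the Django ORM like any other Celery task, and supervisord restarts the worker if it dies, this would also cover the database access and monitoring requirements from the question.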
My intention is not to confuse you, but to point out that this question can be interpreted in many ways, and each interpretation has an answer.
Upvotes: 1
Reputation: 1120
You can run Celery as a background job as described here. When deploying, you will have to run two pods (Celery on one, the Django application on the other), configured so that a restart happens whenever anything is deployed. This way, if your background jobs eat up memory or otherwise misbehave, they should not affect the application.
To run a task in parallel with your application, you can also run a scheduled job that checks for an event, say every minute, and does the work if it's required.
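A minimal sketch of such a scheduled check with Celery beat (the schedule and task names are assumptions, not from the question):

# yourproject/celery.py -- the project's Celery app with a beat schedule
from celery import Celery

app = Celery("yourproject")
app.conf.beat_schedule = {
    "check-event-every-minute": {
        "task": "yourproject.tasks.check_for_event",
        "schedule": 60.0,  # run every 60 seconds
    },
}

# yourproject/tasks.py
from celery import shared_task

@shared_task
def check_for_event():
    # look for the event and only do the work if it's actually required
    ...

Run the scheduler alongside the worker with celery -A yourproject beat, or embed it in the worker process with the -B flag.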
Upvotes: 0