Reputation: 19618
I use Celery with RabbitMQ in my Django app (on Elastic Beanstalk) to manage background tasks, and I daemonized it using Supervisor. The problem now is that one of the periodic tasks I defined is failing (after a week in which it worked properly). The error I get is:
[01/Apr/2014 23:04:03] [ERROR] [celery.worker.job:272] Task clean-dead-sessions[1bfb5a0a-7914-4623-8b5b-35fc68443d2e] raised unexpected: WorkerLostError('Worker exited prematurely: signal 9 (SIGKILL).',)
Traceback (most recent call last):
File "/opt/python/run/venv/lib/python2.7/site-packages/billiard/pool.py", line 1168, in mark_as_worker_lost
human_status(exitcode)),
WorkerLostError: Worker exited prematurely: signal 9 (SIGKILL).
All the processes managed by Supervisor are up and running properly (supervisorctl status says RUNNING).
I tried to read several logs on my EC2 instance, but none of them helped me find out what is causing the SIGKILL. What should I do? How can I investigate?
These are my celery settings:
CELERY_TIMEZONE = 'UTC'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
BROKER_URL = os.environ['RABBITMQ_URL']
CELERY_IGNORE_RESULT = True
CELERY_DISABLE_RATE_LIMITS = False
CELERYD_HIJACK_ROOT_LOGGER = False
And this is my supervisord.conf:
[program:celery_worker]
environment=$env_variables
directory=/opt/python/current/app
command=/opt/python/run/venv/bin/celery worker -A com.cygora -l info --pidfile=/opt/python/run/celery_worker.pid
startsecs=10
stopwaitsecs=60
stopasgroup=true
killasgroup=true
autostart=true
autorestart=true
stdout_logfile=/opt/python/log/celery_worker.stdout.log
stdout_logfile_maxbytes=5MB
stdout_logfile_backups=10
stderr_logfile=/opt/python/log/celery_worker.stderr.log
stderr_logfile_maxbytes=5MB
stderr_logfile_backups=10
numprocs=1
[program:celery_beat]
environment=$env_variables
directory=/opt/python/current/app
command=/opt/python/run/venv/bin/celery beat -A com.cygora -l info --pidfile=/opt/python/run/celery_beat.pid --schedule=/opt/python/run/celery_beat_schedule
startsecs=10
stopwaitsecs=300
stopasgroup=true
killasgroup=true
autostart=false
autorestart=true
stdout_logfile=/opt/python/log/celery_beat.stdout.log
stdout_logfile_maxbytes=5MB
stdout_logfile_backups=10
stderr_logfile=/opt/python/log/celery_beat.stderr.log
stderr_logfile_maxbytes=5MB
stderr_logfile_backups=10
numprocs=1
After restarting celery beat the problem remains.
I also changed killasgroup=true to killasgroup=false and the problem remains.
Upvotes: 77
Views: 83631
Reputation: 3170
This error occurs when an asynchronous task (e.g. using Celery) or the script you are using consumes a large amount of memory due to a memory leak.
In my case, I was getting data from another system and saving it in a variable, so I could export all the data (into a Django model / Excel file) after finishing the process.
Here is the catch: my script was gathering 10 million records, and its memory usage kept growing while it was gathering them, which resulted in the exception above.
To overcome the issue, I divided the 10 million records into 20 parts (half a million each). Every time the gathered data reached 500,000 items, I stored it in my local file / Django model and cleared it, repeating this for every batch of 500k items.
There is no need to use this exact number of partitions; the idea is to solve a complex problem by splitting it into multiple subproblems and solving them one by one. :D
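To make the idea concrete, here is a minimal sketch of that batching pattern, assuming a hypothetical Django model MyModel and an iterable of incoming records (neither is from my actual project; the batch size is just illustrative):

from myapp.models import MyModel  # hypothetical Django model

BATCH_SIZE = 500_000  # flush every half a million records

def export_in_batches(records):
    buffer = []
    for record in records:  # stream records instead of holding all 10M in memory
        buffer.append(MyModel(payload=record))  # hypothetical field
        if len(buffer) >= BATCH_SIZE:
            MyModel.objects.bulk_create(buffer)  # persist the batch
            buffer.clear()                       # release the memory before gathering more
    if buffer:
        MyModel.objects.bulk_create(buffer)      # persist the final partial batch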
Upvotes: 13
Reputation: 370
Our cause was a (synchronous) task that was downloading a zipped JSON file and unpacking/iterating over it in memory on a newly migrated server.
The solution was to enable a swap partition, but optimization (streaming the JSON, e.g. with json-stream) and monitoring (Prometheus + Grafana) are highly recommended.
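For reference, a rough sketch of the streaming approach with the json-stream package mentioned above, assuming a top-level JSON array and json-stream's json_stream.load() API (the file names and process() are placeholders, not our real code):

import zipfile
import json_stream  # pip install json-stream

def process(item):
    ...  # placeholder for the real per-item work

with zipfile.ZipFile("export.zip") as archive:       # hypothetical archive name
    with archive.open("data.json") as fh:            # file-like object, read lazily
        for item in json_stream.load(fh):            # items are parsed one at a time
            process(item)                            # handle and discard each item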
Upvotes: 0
Reputation: 320
The OOM killer was also the culprit in our setup with an ECS EC2 cluster. We used Datadog for monitoring, and you can search for messages like this from the aws-ecs-agent:
level=info time=2024-04-03T14:36:38Z msg="DockerGoClient: process within container <container_id> (name: \"ecs-production-core-worker-393-production-core-worker-a89898f398d0c4fd3c00\") died due to OOM" module=docker_client.go
In our case it happened while SQLAlchemy was querying the Postgres DB: serialization took so much memory that we saw clear memory spikes whenever a certain Celery task was executed (they were plainly visible in our Datadog memory graphs).
The solution was easy: do the whole work in chunks! In our case, we didn't predict that big a data increase for one client, and it started happening all of a sudden!
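A minimal sketch of that chunked approach using SQLAlchemy's yield_per, so rows are fetched and serialized in small batches instead of all at once (Session, Event and serialize_batch are placeholders, not our real code; the batch size is illustrative):

from celery import shared_task
from sqlalchemy import select

from myapp.db import Session      # hypothetical session factory
from myapp.models import Event    # hypothetical mapped class

def serialize_batch(batch):
    ...  # placeholder for the real serialization work

@shared_task
def export_events():
    with Session() as session:
        stmt = select(Event).execution_options(yield_per=1000)
        result = session.execute(stmt).scalars()
        for batch in result.partitions():  # batches of yield_per rows
            serialize_batch(batch)         # process, then let the batch be freed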
Upvotes: 1
Reputation: 2903
The SIGKILL your worker received was initiated by another process. Your supervisord config looks fine, and killasgroup would only affect a supervisor-initiated kill (e.g. via supervisorctl or a plugin) - and without that setting it would have sent the signal to the dispatcher anyway, not the child.
Most likely you have a memory leak and the OS's OOM killer is assassinating your process for bad behavior. Check with:
grep oom /var/log/messages
If you see messages, that's your problem.
If you don't find anything, try running the periodic process manually in a shell:
MyPeriodicTask().run()
And see what happens. I'd monitor system and process metrics from top in another terminal, if you don't have good instrumentation like Cacti, Ganglia, etc. for this host.
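If you want a rough memory number while doing that, here is a quick sketch using only the standard library, e.g. from python manage.py shell (the import path for the task is a guess based on the task name in your log, not your actual module layout):

import resource

from myapp.tasks import clean_dead_sessions  # hypothetical import path

clean_dead_sessions.apply()  # run the Celery task synchronously, in this process
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print("peak RSS: %.1f MiB" % (peak_kb / 1024.0))  # ru_maxrss is in KiB on Linux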
Upvotes: 76