Reputation: 31
I have been using Celery on CentOS servers for 10 months.
(There were no problems with memory, CPU, IO, etc. for those 10 months.)
Server environment
- CentOS 7.3 on 2 servers (VMs)
- celery 4 with 8 concurrency
- RabbitMQ 3.6.10 / Erlang R16B03-1
- Python 3 + Flask
- CELERYD_PREFETCH_MULTIPLIER // not set (default value)
- @celery.task options: acks_late=False, ignore_result=True (see the sketch below)
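Roughly, the setup looks like this (the app name, broker URL, and task body are simplified placeholders, not my actual code):

```python
# Minimal sketch of the configuration described above; names and URLs are placeholders.
from celery import Celery

celery = Celery(
    'tasks',
    broker='amqp://guest:guest@localhost:5672//',  # RabbitMQ 3.6.10
)

# CELERYD_PREFETCH_MULTIPLIER is not overridden, so the default applies.

@celery.task(acks_late=False, ignore_result=True)
def process_message(payload):
    # each message takes roughly 2 seconds of work, with 8-way concurrency
    ...
```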
My service environment
- The service runs on two servers.
- Generally, once the celery workers start running, one full run takes almost 16 hours.
- One message takes about 2 seconds with 8 concurrency, so the whole batch takes about 16 hours on two servers.
- Celery has been running steadily.
But then the problems started...
The day before yesterday, memory suddenly increased rapidly (only on server A).
I didn't pay much attention to it because the remaining celery run time was short.
Then yesterday, server memory increased rapidly again (only on server B).
Server B's actual memory usage (during the 16-hour run and afterwards):
10:00 am // 10%
11:00 am // 23%, service running, normal status
14:30 // 25%, memory starts increasing
16:00 // 91%
17:30 // 89%, added memory to server B (VM)
04:00 am // 90%, celery job is done, but memory is still not released
09:00 am // 10%, after forcibly restarting celery
Something strange:
1) I didn't change my application, any configuration, or the servers.
2) If my application settings were wrong, the problem should occur on both servers at the same time, shouldn't it? (But when I restart celery, the memory is released... so it seems like a celery problem...)
It's really hard to find the cause ;~( Do you have any hints?
Upvotes: 3
Views: 2709
Reputation: 5713
This is an issue with celery 4.2 and RabbitMQ. It has been added to the milestone of the celery 4.3 release as issue #4843.
One solution until the new celery version is released is to switch the broker to Redis (see the sketch below). Otherwise, downgrade celery to 3.1.25, which works fine.
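Switching the broker can be as small as changing the broker URL; the URL below is a placeholder, and you also need the Redis transport installed (e.g. pip install "celery[redis]"):

```python
# Sketch of pointing the same app at Redis instead of RabbitMQ; URL is a placeholder.
from celery import Celery

celery = Celery(
    'tasks',
    broker='redis://localhost:6379/0',
)

# Task definitions stay the same; with ignore_result=True no result backend is required.
```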
Upvotes: 2