Reputation: 31
Celery 4.4.4, Python 3. I have created a custom Request and Task whose only job is to catch on_failure calls caused by WorkerLostError. When I kill a running task, on_failure does execute, which lets me log the failure.
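Roughly what I have (a minimal sketch; `MyRequest`/`MyTask` and the logging are placeholders for my real classes):

```python
import logging

from billiard.exceptions import WorkerLostError
from celery import Task
from celery.worker.request import Request

logger = logging.getLogger(__name__)


class MyRequest(Request):
    """Worker-side request that notices when the pool process died."""

    def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
        super().on_failure(exc_info,
                           send_failed_event=send_failed_event,
                           return_ok=return_ok)
        if isinstance(exc_info.exception, WorkerLostError):
            # This does fire for me when I kill the running task's process.
            logger.error("WorkerLostError for task %s (%s)",
                         self.id, self.task.name)


class MyTask(Task):
    Request = MyRequest  # attach the custom request to my tasks
```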
I also want to retry the task. request.task.retry() fails: apparently the task must be re-launched entirely, and since it is usually a chord callback or a chord dependency, I have not been able to get that working. However, request.execute() succeeds, and I don't understand why it behaves differently.
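Concretely, inside the custom request's on_failure I have been trying things along these lines (a sketch; the comments describe what I observe):

```python
# inside MyRequest.on_failure, after detecting the WorkerLostError:

try:
    # This is what fails for me -- outside of a running task there is no
    # usable retry context, and for chord callbacks/dependencies the task
    # never seems to rejoin the chord.
    self.task.retry()
except Exception:
    logger.exception("retry() failed for task %s", self.id)

# This, however, runs the task again right here in the worker and works,
# though I don't understand why it behaves so differently:
self.execute()
```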
I would also like the option of relaunching a failed task onto a different queue (let's say 'bigqueue') that could have higher requests and limits than normal.
I'm looking for help on the best way to retry this task, ideally routing it to a different queue while maintaining the current context and expiration info. Is request.execute() a bad idea, and why? And given this context, how might I change its _context.delivery_info['routing_key']?
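For the re-queue part, the closest I have come is re-publishing the task from on_failure with apply_async (a sketch; 'bigqueue' is a placeholder, and I don't know whether this preserves chord membership, expiration, or the rest of the delivery info):

```python
# inside MyRequest.on_failure, after detecting the WorkerLostError:

# The original routing is visible here, but I don't know how to change it
# cleanly before re-publishing:
original_routing_key = self.delivery_info.get('routing_key')

self.task.apply_async(
    args=self.args,
    kwargs=self.kwargs,
    queue='bigqueue',       # queue with bigger requests/limits than normal
    expires=self.expires,   # trying to carry over the original expiration
)
```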
Upvotes: 0
Views: 62