Aida

Reputation: 2424

Why does the Celery worker leave a task in the PENDING state for so long?

I have a Celery worker that runs the following tasks.py:

from celery import Celery
from kombu import Queue  # Queue is the only kombu import actually used


app = Celery('tasks', backend='redis://', broker='pyamqp://guest:guest@localhost/')
app.conf.task_default_queue = 'default'
app.conf.task_queues = (
    Queue('queue1', routing_key='tasks.add'),
    Queue('queueA', routing_key='tasks.task_1'),
    Queue('queueB', routing_key='tasks.task_2'),
    Queue('queueC', routing_key='tasks.task_3'),
    Queue('queueD', routing_key='tasks.task_4'),
)


@app.task
def add(x, y):
    print("add(" + str(x) + "+" + str(y) + ")")
    return x + y

And a tasks_canvas.py that creates a chain of tasks:

from celery import chain
from tasks import add

result = chain(add.s(2, 2), add.s(4), add.s(8)).apply_async(queue='queue1')

print(result.status)
print(result.get())

But when I run tasks_canvas.py, result.status is always PENDING and the worker never runs the whole chain. Here is the output of running tasks_canvas.py:

C:\Users\user_\Desktop\Aida>tasks_canvas.py
PENDING

And, here is the output of the worker:

C:\Users\user_\Desktop\Aida>celery -A tasks worker -l info -P eventlet

 -------------- celery@User-RazerBlade v4.2.0 (windowlicker)
---- **** -----
--- * ***  * -- Windows-10-10.0.17134-SP0 2018-07-16 12:04:20
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x41d5390
- ** ---------- .> transport:   amqp://guest:**@localhost:5672//
- ** ---------- .> results:     redis://
- *** --- * --- .> concurrency: 4 (eventlet)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> queue1           exchange=(direct) key=tasks.add
                .> queueA           exchange=(direct) key=tasks.task_1
                .> queueB           exchange=(direct) key=tasks.task_2
                .> queueC           exchange=(direct) key=tasks.task_3
                .> queueD           exchange=(direct) key=tasks.task_4

[tasks]
  . tasks.add
  . tasks.task_1
  . tasks.task_2
  . tasks.task_3
  . tasks.task_4
  . tasks.task_5

[2018-07-16 12:04:20,334: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2018-07-16 12:04:20,351: INFO/MainProcess] mingle: searching for neighbors
[2018-07-16 12:04:21,394: INFO/MainProcess] mingle: all alone
[2018-07-16 12:04:21,443: INFO/MainProcess] celery@User-RazerBlade ready.
[2018-07-16 12:04:21,448: INFO/MainProcess] pidbox: Connected to amqp://guest:**@127.0.0.1:5672//.
[2018-07-16 12:04:23,101: INFO/MainProcess] Received task: tasks.add[e6306b5b-211f-4015-b57e-05e2d0ac2df2]
[2018-07-16 12:04:23,102: WARNING/MainProcess] add(2+2)
[2018-07-16 12:04:23,128: INFO/MainProcess] Task tasks.add[e6306b5b-211f-4015-b57e-05e2d0ac2df2] succeeded in 0.031000000000858563s: 4

I'm new to Celery, so I'd like to know why this happens and how I can make the worker run the whole chain.

Upvotes: 3

Views: 3571

Answers (1)

Aida

I resolved the problem. There is a guide here on why a task is always in the PENDING state, but it does not cover all cases. In my case it was a task-routing problem: the queue='queue1' option on apply_async() routes only the first task of the chain, and the remaining tasks fall back to the default queue, which my worker was not consuming (its task_queues lists only queue1 and queueA through queueD). That matches the worker log above, where only add(2+2) ever runs. When I use the default queue, all tasks in the chain run right away.

Upvotes: 2
