Reputation: 3132
I am using http://python-rq.org/ to queue and execute tasks on Heroku worker dynos. These are long-running tasks and occasionally I need to cancel them in mid-execution. How do I do that from Python?
from redis import Redis
from rq import Queue
from my_module import count_words_at_url
q = Queue(connection=Redis())
result = q.enqueue(count_words_at_url, 'http://nvie.com')
and later in a separate process I want to do:
from redis import Redis
from rq import Queue
from my_module import count_words_at_url
q = Queue(connection=Redis())
result = q.revoke_all() # or something
Thanks!
Upvotes: 12
Views: 7693
Reputation: 476
From the docs:
You can use
send_stop_job_command()
to tell a worker to immediately stop a currently executing job. A job that’s stopped will be sent to FailedJobRegistry.
from redis import Redis
from rq.command import send_stop_job_command
redis = Redis()
send_stop_job_command(redis, job_id)  # job_id of the currently executing job
Upvotes: 2
Reputation: 1396
I think the most common solution is to have the worker spawn another thread/process to do the actual work, and then periodically check the job metadata. To kill the task, set a flag in the metadata and then have the worker kill the running thread/process.
Upvotes: 4
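A minimal, runnable sketch of that pattern (the names here are illustrative, not part of the rq API): the job function spawns the real work as a child process, then polls a cancel flag and kills the child when it is set. In a real rq worker the flag would live in the job's metadata, re-read with job.refresh() and checked via job.meta.get('cancel'); here a plain callable stands in for it so the sketch is self-contained.

```python
import subprocess
import sys
import time

def run_with_cancellation(should_cancel, poll_interval=0.1):
    """Run the actual work in a child process, killing it if cancellation
    is requested. should_cancel is a no-argument callable standing in for
    a check of job.meta (hypothetical; not an rq API)."""
    # The child process stands in for the real long-running task.
    worker = subprocess.Popen(
        [sys.executable, '-c', 'import time; time.sleep(60)'])
    while worker.poll() is None:      # work still running
        if should_cancel():           # in rq: job.refresh(); job.meta.get('cancel')
            worker.terminate()        # kill the work mid-execution
            worker.wait()
            return 'cancelled'
        time.sleep(poll_interval)
    return 'finished'
```

The separate "cancel" process would then only need to set the flag in the job's metadata (job.meta['cancel'] = True; job.save_meta()) rather than touch the worker directly.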
Reputation: 52233
If you have the job instance at hand, simply call:
job.cancel()
Or, if you know the job ID:
from rq import cancel_job
cancel_job('2eafc1e6-48c2-464b-a0ff-88fd199d039c')
But that just removes it from the queue; I don't know whether it will kill the job if it's already executing.
You could also have the job record its start time, check the elapsed wall time periodically, and raise an exception (self-destruct) after a deadline.
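That self-destruct idea can be sketched like this (JobTimeout and the chunked loop are illustrative, not anything rq provides): the task notes its start time and bails out with an exception once a wall-time budget is exceeded.

```python
import time

class JobTimeout(Exception):
    """Illustrative exception for a job that exceeded its wall-time budget."""
    pass

def self_destructing_task(chunks, max_seconds=5.0):
    """Do work in small chunks, aborting once wall time exceeds max_seconds."""
    start = time.monotonic()
    done = 0
    for _ in range(chunks):
        if time.monotonic() - start > max_seconds:
            raise JobTimeout('exceeded %.1fs of wall time' % max_seconds)
        time.sleep(0.01)  # stands in for one unit of real work
        done += 1
    return done
```

This only works for tasks that can be split into chunks with a check between them; it won't interrupt a single long blocking call.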
For manual, ad-hoc-style death: if you have redis-cli installed, you can do something drastic, like flushing all queues and jobs:
$ redis-cli
127.0.0.1:6379> flushall
OK
127.0.0.1:6379> exit
I'm still digging around the documentation to try to find how to make a precision kill.
Not sure if that helps anyone, since the question is already 18 months old.
Upvotes: 13