Reputation: 135
I am in the middle of running a Sidekiq job, and let's say that for some reason an error occurs. For that particular error we don't want to update the retry_count of the Sidekiq job, but we do want to trigger the retry. Is there any particular way to do this?
I tried deleting one of the jobs, modifying the item so that it doesn't update the retry queue, and pushing it again. However, this causes inconsistency: when Sidekiq realises there was an error, the deleted job comes back with an updated retry count.
I am doing all this in middleware, as that is where the Sidekiq properties are accessible.
def call(worker, item, queue)
  yield # run the job; say some error occurs here
rescue HandleThisError
  # Fetch our copy of the job and pre-decrement the count so that
  # Sidekiq's own increment leaves it unchanged
  job = get_job_from_sidekiq(item["queue"])
  job["retry_count"] = [item["retry_count"].to_i - 1, 0].max
  raise # re-raise so the retry is still triggered
end
Basically I am trying to avoid the retry count increasing. This doesn't seem to be working; is there any workaround for it?
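For reference, the middleware is wired into the server chain in the usual way (the initializer path and the class name here are just placeholders for how I've set it up):
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add RetryCountMiddleware
  end
end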
Upvotes: 0
Views: 1061
Reputation: 4686
Sidekiq gives you global and worker-level options for setting retry behavior. The best on-the-rails approach would be to work within these options.
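For illustration, the worker-level knobs look roughly like this (the class and the numbers are placeholders, not specific to your job):
class MyWorker
  include Sidekiq::Worker

  sidekiq_options retry: 5 # cap retry attempts for this worker only

  # Custom backoff, in seconds, instead of the default exponential curve
  sidekiq_retry_in do |count, exception|
    30 * (count + 1)
  end

  # Runs once all retries are used up
  sidekiq_retries_exhausted do |job, exception|
    Sidekiq.logger.warn("Giving up on #{job['class']} #{job['jid']}")
  end

  def perform(*args)
    # ...
  end
end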
Editing the details of a specific job requires updating those details in Redis. That's doable manually if you know how, but it's really what Sidekiq is doing for you, and personally I'm not sure I'd take this approach.
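If you did want to do it manually, it would look something like this sketch against Sidekiq's API (target_jid is a placeholder for however you identify the job, and note this re-enqueues immediately rather than waiting for the original retry time):
require "sidekiq/api"

entry = Sidekiq::RetrySet.new.find { |e| e.jid == target_jid }
if entry
  item = entry.item                        # the raw job hash
  item["retry_count"] = item["retry_count"].to_i - 1
  entry.delete                             # drop the old copy from the retry set
  Sidekiq::Client.push(item)               # re-enqueue it right away
end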
Based on your comment, the only way I can think of to accomplish this without touching Redis yourself would be to intervene just before Sidekiq writes the job details back to Redis for the next retry. You could change the retry_count value just before Sidekiq writes it to Redis. You'd probably be hacking the gem, and probably during requeue.
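As a sketch of that kind of gem hack (very much unsupported): you could prepend a module over Sidekiq's private retry handler. The method below is attempt_retry(worker, msg, queue, exception) as found in Sidekiq 5.x's lib/sidekiq/job_retry.rb; newer versions rename and reshuffle it, so treat the name and signature as assumptions and check the source of the version you're running:
require "sidekiq/job_retry"

module FreezeRetryCount
  def attempt_retry(worker, msg, queue, exception)
    if exception.is_a?(HandleThisError) && msg["retry_count"]
      # Pre-decrement so the increment Sidekiq applies before the
      # Redis write nets out to no change
      msg["retry_count"] -= 1
    end
    super
  end
end

Sidekiq::JobRetry.prepend(FreezeRetryCount)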
Note that Sidekiq also has a bulk_requeue, so there are at least two ways your job could be sent to Redis for storage.
Upvotes: 1