Reputation: 9444
(Using collectiveidea's delayed_job)
I have a job that scrapes for a keyword, but I want the job to destroy itself if the keyword has been deleted since it was enqueued (i.e. a user deletes one of his keywords).
class ScrapingJob < Struct.new(:keyword_id)
  def perform
    keyword = Keyword.find(keyword_id)
    data = keyword.scrape
    keyword.details.create!(:text => data[:text])
  end
end
I was trying to put it in DJ's before hook by moving the keyword lookup into something like:
def before(job)
  # If the keyword doesn't exist, destroy the job
  begin
    @keyword = Keyword.find(keyword_id)
  rescue ActiveRecord::RecordNotFound
    self.destroy
  end
end
The job fails, so DJ keeps re-attempting this job until it hits whatever retry cap I have specified.
Here's the failure:
Keyword Load (0.4ms) SELECT "keywords".* FROM "keywords"
WHERE ("keywords"."id" = 292929) LIMIT 1
AREL (1.1ms) UPDATE "delayed_jobs"
SET "last_error" = '{Couldn''t find Keyword with ID=292929
...
...
I want DJ to just destroy the job as soon as it sees that the keyword doesn't exist, bypassing the whole retry system.
Upvotes: 1
Views: 1554
Reputation: 46
Just have it return silently without raising an exception, and the job will be gone.
def perform
  if keyword = Keyword.find_by_id(keyword_id)
    data = keyword.scrape
    keyword.details.create!(:text => data[:text])
  end
end
I changed find() to find_by_id() so it won't raise an exception, but alternatively you could rescue.
This way, the job just doesn't do anything if the keyword is gone. A job that doesn't raise an exception just goes away.
We use this pattern quite a bit at Collective Idea.
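The rescue variant mentioned above would look something like the sketch below. In a real app the exception would be `ActiveRecord::RecordNotFound` raised by `Keyword.find`; here a stand-in exception and a stub `Keyword` class are defined so the snippet runs on its own:

```ruby
# Stand-ins for the Rails pieces (assumptions for this sketch):
# in the real app, Keyword is an ActiveRecord model and find() raises
# ActiveRecord::RecordNotFound when the row is gone.
class RecordNotFound < StandardError; end

class Keyword
  def self.find(id)
    raise RecordNotFound, "Couldn't find Keyword with ID=#{id}"
  end
end

class ScrapingJob < Struct.new(:keyword_id)
  def perform
    keyword = Keyword.find(keyword_id)
    data = keyword.scrape
    keyword.details.create!(:text => data[:text])
  rescue RecordNotFound
    # The keyword was deleted after the job was enqueued; return quietly
    # so delayed_job treats the run as a success and removes the job
    # instead of retrying it.
  end
end

ScrapingJob.new(292929).perform  # completes without raising
```

Either way the effect is the same: `perform` finishes without an exception, so the worker deletes the job and the retry machinery never kicks in.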
Upvotes: 3
Reputation: 7473
Here's an easy solution by avoiding the exception:
class ScrapingJob < Struct.new(:keyword_id)
  def perform
    keyword = Keyword.find_by_id(keyword_id)
    unless keyword.nil?
      data = keyword.scrape
      keyword.details.create!(:text => data[:text])
    end
  end
end
Upvotes: 1