Reputation: 636
Background: I am using Hangfire in a .NET web application to run a recurring job scheduled for the 3rd Saturday of every month. However, the client can also trigger that job manually from the Hangfire dashboard. This is a long-running job that can sometimes take a few hours to complete.
Issue:
When this job is triggered manually, it sometimes gets called twice automatically, and I'm not sure why.
Does anyone have any idea about this issue?
To get rid of it, I've seen many people suggest the DisableConcurrentExecution attribute, but since that attribute asks for a timeout in seconds, I'm doubtful whether it will work with a long-running job.
Is there any way to avoid invoking a long-running job if one has already started?
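For reference, this is roughly how the suggested attribute is applied (the method name here is just illustrative; the argument is the distributed-lock timeout in seconds):

```csharp
// Sketch only: holds a distributed lock for the job; a second invocation
// waits up to the timeout for the lock before failing.
[DisableConcurrentExecution(timeoutInSeconds: 60)]
public void MyMonthlyJob()
{
    // long-running work here
}
```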
Upvotes: 2
Views: 4004
Reputation: 1
I spent days on this until I stumbled on an elegant answer by Dev Superman, found here: Hangfire - Prevent multiples of the same job being enqueued
I've written the following filters to prevent duplication of scheduled and recurring jobs, for Hangfire 1.8.12:
public class PreventConcurrentScheduledJobFilter : JobFilterAttribute, IClientFilter, IServerFilter
{
    public void OnCreating(CreatingContext filterContext)
    {
        // Inspect the first 100 scheduled jobs; if one with the same type
        // and arguments already exists, cancel creating this one.
        var jobs = JobStorage.Current.GetMonitoringApi().ScheduledJobs(0, 100);
        if (jobs.Any(x => x.Value.Job.Type == filterContext.Job.Type
                       && string.Join(".", x.Value.Job.Args) == string.Join(".", filterContext.Job.Args)))
        {
            filterContext.Canceled = true;
        }
    }

    public void OnPerformed(PerformedContext filterContext) { }
    void IClientFilter.OnCreated(CreatedContext filterContext) { }
    void IServerFilter.OnPerforming(PerformingContext filterContext) { }
}
public class PreventConcurrentRecurringJobFilter : JobFilterAttribute, IClientFilter, IServerFilter
{
    public void OnCreating(CreatingContext filterContext)
    {
        // Inspect the first 100 currently processing jobs; if one with the
        // same type and arguments is already running, cancel this one.
        var jobs = JobStorage.Current.GetMonitoringApi().ProcessingJobs(0, 100);
        if (jobs.Any(x => x.Value.Job.Type == filterContext.Job.Type
                       && string.Join(".", x.Value.Job.Args) == string.Join(".", filterContext.Job.Args)))
        {
            filterContext.Canceled = true;
        }
    }

    public void OnPerformed(PerformedContext filterContext) { }
    void IClientFilter.OnCreated(CreatedContext filterContext) { }
    void IServerFilter.OnPerforming(PerformingContext filterContext) { }
}
Note that the difference between these filters is that GetMonitoringApi().ProcessingJobs is used for recurring jobs and GetMonitoringApi().ScheduledJobs for scheduled jobs. You can just as easily write filters for other job types.
Similarly, you can change (0, 100) to however many jobs you want to search. For example, if you know that your recurring jobs will never exceed 20, you can use (0, 20).
Usage examples:
[PreventConcurrentRecurringJobFilter]
public async Task MyRecurringJobTask()
{
    ...
}

[PreventConcurrentScheduledJobFilter]
public async Task MyScheduledJobTask()
{
    ...
}
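If you'd rather not decorate every job method, the same filters can also be registered globally at startup (a sketch; the storage call stands in for whatever storage your application already configures):

```csharp
// Apply the filters to every job instead of per-method.
GlobalConfiguration.Configuration
    .UseSqlServerStorage("your-connection-string") // or your existing storage setup
    .UseFilter(new PreventConcurrentScheduledJobFilter())
    .UseFilter(new PreventConcurrentRecurringJobFilter());
```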
Upvotes: 0
Reputation: 9818
To get around this in the end, I did the following.
These attributes are added to my interfaces:
[PingUrlToKeepAlive]
[SkipWhenPreviousJobIsRunning]
[DisableConcurrentExecution(10)]
[Queue("{0}")]
PingUrlToKeepAlive is an attribute I created to stop the IIS process from shutting down after 20 minutes on either server; it has nothing to do with this fix, I just thought I would mention it.
Queue is the recommended way now, according to Hangfire.
DisableConcurrentExecution is the attribute I thought was all I needed, but you also need the one below.
SkipWhenPreviousJobIsRunning is a new attribute, which looks like this:
public class SkipWhenPreviousJobIsRunningAttribute : JobFilterAttribute, IClientFilter, IApplyStateFilter
{
    public void OnCreating(CreatingContext context)
    {
        var connection = context.Connection as JobStorageConnection;

        // We can't handle old storages
        if (connection == null) return;

        // We should run this filter only for background jobs based on
        // recurring ones
        if (!context.Parameters.ContainsKey("RecurringJobId")) return;

        var recurringJobId = context.Parameters["RecurringJobId"] as string;

        // RecurringJobId is malformed. This should not happen, but anyway.
        if (string.IsNullOrWhiteSpace(recurringJobId)) return;

        var running = connection.GetValueFromHash($"recurring-job:{recurringJobId}", "Running");
        if ("yes".Equals(running, StringComparison.OrdinalIgnoreCase))
        {
            context.Canceled = true;
        }
    }

    public void OnCreated(CreatedContext filterContext)
    {
    }

    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        if (context.NewState is EnqueuedState)
        {
            // Mark the recurring job as running when an instance is enqueued.
            var recurringJobId = SerializationHelper.Deserialize<string>(context.Connection.GetJobParameter(context.BackgroundJob.Id, "RecurringJobId"));
            if (string.IsNullOrWhiteSpace(recurringJobId)) return;

            transaction.SetRangeInHash(
                $"recurring-job:{recurringJobId}",
                new[] { new KeyValuePair<string, string>("Running", "yes") });
        }
        else if (context.NewState.IsFinal /* || context.NewState is FailedState*/)
        {
            // Clear the flag once the job reaches a final state.
            var recurringJobId = SerializationHelper.Deserialize<string>(context.Connection.GetJobParameter(context.BackgroundJob.Id, "RecurringJobId"));
            if (string.IsNullOrWhiteSpace(recurringJobId)) return;

            transaction.SetRangeInHash(
                $"recurring-job:{recurringJobId}",
                new[] { new KeyValuePair<string, string>("Running", "no") });
        }
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
    }
}
Basically, this checks whether the job is already running and, if so, cancels the new instance. We now have no problems with jobs running on both servers at the same time.
The above works for recurring jobs, but you can easily change it to work for all jobs.
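For context, here is a sketch of how these attributes sit on an interface method and how the recurring job is registered at startup (the names, queue, and schedule are illustrative, not from my actual code):

```csharp
public interface IMyJobs
{
    [SkipWhenPreviousJobIsRunning]
    [DisableConcurrentExecution(timeoutInSeconds: 10)]
    [Queue("default")]
    Task DoLongRunningWorkAsync();
}

// Registration, e.g. in startup code; the schedule here is a placeholder.
RecurringJob.AddOrUpdate<IMyJobs>(
    "my-long-running-job",
    j => j.DoLongRunningWorkAsync(),
    Cron.Monthly());
```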
Upvotes: 3