CidoPostan

Reputation: 31

Using System.Timers.Timer Elapsed with DbContext, record never gets updated

Hi, I am using the following code with System.Timers.Timer. The method triggered by the Elapsed event uses my DbContext, which is injected via DI. The record is updated in the database in the background, but the DbContext never seems to see the latest change.

Do I need to configure the DbContext not to cache, or is it something else?

private System.Timers.Timer aTimer;
private Stopwatch stopWatch;

//MAIN CODE
aTimer = new System.Timers.Timer
{
    Interval = _settings.RetryIntervalsInSeconds * 1000
};
stopWatch = Stopwatch.StartNew();
// Hook up the Elapsed event for the timer. 
aTimer.Elapsed += async (sender, e) =>
    await VerifyRecordIsUpdatedAsync(sender, e, record.Id);

// Have the timer fire repeated events (true is the default)
aTimer.AutoReset = true;

// Start the timer
aTimer.Start();

while (aTimer.Enabled)
{
    var timeRetryExecuting = stopWatch.Elapsed;
    if (timeRetryExecuting.Seconds > _settings.TimeoutWaitingInSeconds)
    {
        aTimer.Stop();
    }
}

// EVENT METHOD
private async Task VerifyRecordIsUpdatedAsync(object sender, 
    ElapsedEventArgs e, Guid recordId)
{
    var record = await _context.Records.FirstOrDefaultAsync(x => x.Id == recordId);
    var timeRetryExecuting = stopWatch.Elapsed;
    if (record.Status != Status.Pending
        || timeRetryExecuting.Seconds > _settings.TimeoutWaitingInSeconds)
    {
        aTimer.Stop();
        stopWatch.Stop();
    }
}

Upvotes: 0

Views: 656

Answers (1)

Steve Py

Reputation: 34968

Since you are not using a synchronizing object for your timer, each Elapsed event will be handled on a ThreadPool thread. This can lead to issues, and yes, the injected DbContext instance will keep serving the entity it has already loaded and is tracking (i.e. cached) rather than the latest values from the database.

If other code is modifying data on that instance of a Record, the timer should see that change, but you could end up with cross-thread exceptions which would likely be silently swallowed. If this is something like a Windows application, you would want to use a synchronizing object to ensure the event handler runs on the main UI thread.
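
For example, in a WinForms application it might look something like this (a rough sketch, assuming the timer is owned by a Form, which implements ISynchronizeInvoke):

aTimer = new System.Timers.Timer
{
    Interval = _settings.RetryIntervalsInSeconds * 1000,
    // With a SynchronizingObject set, Elapsed is marshalled onto the UI thread
    // instead of running on a ThreadPool thread. "this" is assumed to be the Form.
    SynchronizingObject = this
};
aTimer.Elapsed += async (sender, e) =>
    await VerifyRecordIsUpdatedAsync(sender, e, record.Id);
aTimer.Start();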

Alternatively, you could inject something like a DbContextFactory for this event handler to use rather than accessing an injected DbContext instance. That way your event handler could look something like this:

private async Task VerifyRecordIsUpdatedAsync(object sender, 
    ElapsedEventArgs e, Guid recordId)
{
    // Use the timer that raised the event rather than a module-level reference.
    var timer = sender as System.Timers.Timer
        ?? throw new ArgumentException("sender was not recognized as a Timer");
    using (var context = _dbContextFactory.Create())
    {
        // Project just the status; a short-lived context reads the latest saved state.
        var recordStatus = await context.Records
            .Where(x => x.Id == recordId)
            .Select(x => x.Status)
            .SingleAsync();
        var timeRetryExecuting = stopWatch.Elapsed;
        if (recordStatus != Status.Pending
            || timeRetryExecuting.TotalSeconds > _settings.TimeoutWaitingInSeconds)
        {
            timer.Stop();
        }
    }
}

A couple of notable points here. If you just need the status for the current record, it can be more efficient to request just that field for the given record. Avoid defaulting to FirstOrDefault for fetching a single record, and if you do use a *OrDefault method, you should expect and handle the possibility of no record coming back, rather than letting a NullReferenceException surface some time later when the object is eventually dereferenced.
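
If you did stick with FirstOrDefaultAsync, the missing-record case should be handled explicitly; a rough sketch, reusing the same hypothetical _dbContextFactory, stopWatch and _settings fields as the example above:

private async Task VerifyRecordIsUpdatedAsync(object sender, 
    ElapsedEventArgs e, Guid recordId)
{
    var timer = sender as System.Timers.Timer
        ?? throw new ArgumentException("sender was not recognized as a Timer");
    using (var context = _dbContextFactory.Create())
    {
        var record = await context.Records.FirstOrDefaultAsync(x => x.Id == recordId);
        if (record == null)
        {
            // No matching record: stop polling rather than failing later on a null reference.
            timer.Stop();
            return;
        }
        if (record.Status != Status.Pending
            || stopWatch.Elapsed.TotalSeconds > _settings.TimeoutWaitingInSeconds)
        {
            timer.Stop();
        }
    }
}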

Rather than accessing module-level references from within event handlers (across threads), it is generally better to get at the appropriate sender via the passed-in argument. All it takes is for some code to reassign the "aTimer" reference, and the handler ends up stopping a different Timer instance than the one that raised the event, leaving the original timer ticking. The Stopwatch's Stop should occur after the Timer is stopped, within the scope of the thread that created it.

One last consideration would be to avoid the AutoReset behaviour and instead handle a single Elapsed event, restarting the timer on completion. There are two issues with AutoReset timers. Firstly, if an exception happens and the timer is operating on a ThreadPool thread, this often leaves timers running and silently failing repeatedly, where it is arguably better to have the timer stop. Secondly, under load the work done for one interval could overrun the interval itself, which can lead to a compounding load issue if that work is using limited resources such as database connections. A better pattern is to set AutoReset to false, then "start" the timer again after an interval completes:

private async Task VerifyRecordIsUpdatedAsync(object sender, 
    ElapsedEventArgs e, Guid recordId)
{
    // Use the timer that raised the event rather than a module-level reference.
    var timer = sender as System.Timers.Timer
        ?? throw new ArgumentException("sender was not recognized as a Timer");
    using (var context = _dbContextFactory.Create())
    {
        // Project just the status; a short-lived context reads the latest saved state.
        var recordStatus = await context.Records
            .Where(x => x.Id == recordId)
            .Select(x => x.Status)
            .SingleAsync();
        var timeRetryExecuting = stopWatch.Elapsed;
        if (recordStatus == Status.Pending
            && timeRetryExecuting.TotalSeconds <= _settings.TimeoutWaitingInSeconds)
        {
            // Still pending and not timed out: schedule the next check.
            timer.Start();
        }
    }
}

This involves reversing your logic check: instead of calling Stop() on the timer when a certain condition is met, you call Start() to continue the intervals until that condition is met. AutoReset is better for ensuring a particular interval is observed, but you have to guarantee that the interval will never be overrun, or risk a cascading issue. Non-auto-resetting timers only ensure a minimum interval is respected, but they give the handler time to complete. The time spent in the event handler can itself be measured and used to adjust the next interval if something like an "every 5 seconds" cadence is important: if the handler takes 2 seconds, set the next interval to 3 seconds; if it takes 6 seconds, either run again immediately or wait for the next 5-second mark (i.e. the 10-second point), as sketched below.
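
A rough sketch of that last idea (the handler name, the DoWorkAsync placeholder and the 5-second target are illustrative, not part of the original code):

private async Task DoIntervalWorkAsync(object sender, ElapsedEventArgs e)
{
    var timer = sender as System.Timers.Timer
        ?? throw new ArgumentException("sender was not recognized as a Timer");
    var handlerWatch = Stopwatch.StartNew();

    await DoWorkAsync(); // placeholder for the actual work done each interval

    handlerWatch.Stop();
    const double targetIntervalMs = 5_000;
    var remainingMs = targetIntervalMs - handlerWatch.ElapsedMilliseconds;
    // If the work overran the target, fire again almost immediately; otherwise
    // wait only for the remainder of the 5-second window.
    timer.Interval = Math.Max(remainingMs, 1);
    timer.Start();
}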

Update: Caveats of this approach. The DbContextFactory can work side by side with the main registered DbContext instance(s). These explicitly scoped DbContexts can serve in places such as thread workers, or anywhere you need a "clean" DbContext instance: one free of dirty changes that might block a SaveChanges call due to an exception, or that might commit changes before you are ready. (For example, writing a log entry independently of other changes the main DbContext is tracking, if you use a single DbContext definition.) However, it is important to consider that any entities loaded by a DbContext provided by the factory should not live beyond the lifetime scope of that DbContext, i.e. not be referenced outside of the using {} block, "returned" to a caller, etc. Those instances' DbContext reference will have been disposed, and it is possible that the main DbContext already has tracked references for the same record.
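
Registration for the two can sit next to each other in the composition root. A rough sketch, assuming EF Core 6 or later, a hypothetical AppDbContext and connection string; note that EF Core's built-in factory is resolved as IDbContextFactory<AppDbContext> and exposes CreateDbContext(), whereas the Create() calls above assume a custom factory wrapper:

services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(connectionString));        // scoped "main" context
services.AddDbContextFactory<AppDbContext>(options =>
    options.UseSqlServer(connectionString));        // factory for short-lived contexts

// In the event handler:
// using (var context = _dbContextFactory.CreateDbContext()) { ... }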

This also means that while entities read by factory-built DbContexts will see the current saved state of a particular entity, they will not see unsaved state for entities currently tracked (and possibly being modified) by other DbContexts. It also means that entity instances already tracked by other DbContexts will not see changes you save through these factory-scoped DbContexts until they are reloaded (i.e. _context.Entry(someEntity).Reload()). Ideally, if you are using a main IoC-scoped DbContext in combination with a DbContextFactory, the operations of the factory-scoped DbContexts should be limited to reads, or to completely isolated operations that the entities managed by the main DbContext instance don't need to be aware of or dependent on.
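
For example (a rough sketch; trackedRecord stands in for an entity the main _context is already tracking):

var trackedRecord = await _context.Records.FirstAsync(x => x.Id == recordId);
// ... a factory-scoped DbContext updates and saves that same row elsewhere ...
await _context.Entry(trackedRecord).ReloadAsync(); // pull the saved values into the tracked instance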

If you do need to update tracked entities as part of something like a timer event, then you would be better off using a synchronizing object for that specific timer, so the event handler runs on the originating thread and can use the main "_context" instance.

Upvotes: 1
