user416

Reputation: 536

How to schedule SSIS jobs without Deadlock

We recently faced a SQL Server deadlock issue on a client machine. We suspect it is caused by scheduling Job A, Job B, and Job C at frequent intervals:

a. Job A – every 15 minutes
b. Job B – every 20 minutes
c. Job C – every 30 minutes

These intervals do not vary with data volume, which leads to overlapping executions of the jobs. Also, if there is manual intervention, how can we make sure a job completes its pending work?

1. Is there a way to create a dependency between jobs?

That way Job B would wait for Job A to complete, and Job C would wait for Job B to complete.

2. How do we handle the worst-case scenario: if a job fails, how do we roll back the transaction?

3. Is there a way to write and track custom log files/email alerts for SSIS job status?

Thanks in advance

Upvotes: 1

Views: 693

Answers (2)

Nick.Mc

Reputation: 19184

I'm currently handling this by putting this as the first step in the job:

-- Check running SQL Agent jobs. If there are any 'Load' jobs then don't run this one.
-- This step has 3 retries at 5 minutes each
IF EXISTS
(
    SELECT sj.name, sja.*
    FROM msdb.dbo.sysjobactivity AS sja
    INNER JOIN msdb.dbo.sysjobs AS sj ON sja.job_id = sj.job_id
    WHERE sja.start_execution_date IS NOT NULL
    AND sja.stop_execution_date IS NULL
    AND sj.name LIKE 'Load%'
)
    RAISERROR('Job is still currently running. Cannot run this job in parallel. This step will retry.',18,0)

Then in the Advanced tab you set the step to retry a number of times at whatever interval you choose. If another job is already running (identified in this example by a job name starting with 'Load'), it will raise an error and retry.
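The same retry settings can also be applied with T-SQL instead of the Advanced tab, via `msdb.dbo.sp_update_jobstep`. A minimal sketch (the job name here is hypothetical; use your own):

```sql
-- Configure step 1 of a hypothetical job to retry 3 times,
-- waiting 5 minutes between attempts.
EXEC msdb.dbo.sp_update_jobstep
    @job_name       = N'Load Job B',  -- hypothetical job name
    @step_id        = 1,
    @retry_attempts = 3,              -- number of retries before failing
    @retry_interval = 5;              -- minutes between retries
```

Combined with the IF EXISTS check above, this gives the step roughly 15 minutes to wait out a conflicting load before giving up.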

The other option to avoid deadlocks is:

  1. Optimise your queries so they don't take so long.
  2. Add SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; to the start of your query to reduce contention... but you should UNDERSTAND what this does first.

Upvotes: 1

thursdaysgeek

Reputation: 7926

It's a bit kludgy, but each job can have a flag file that it clears when it starts and writes when it ends. The next job checks for the previous job's flag file before starting. And of course, if a job tries to start several times and the flag file still isn't there, it sends an email that it's stymied, so a human can sort things out and get everything running properly again.
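If you'd rather keep everything in the database, the same handshake can be done with a semaphore table instead of a file. A rough sketch (table and job names are hypothetical):

```sql
-- Hypothetical semaphore table: one row per job, acting as the "flag file".
CREATE TABLE dbo.JobFlags (JobName sysname PRIMARY KEY, Completed bit NOT NULL);

-- Job A clears its flag as its first step...
UPDATE dbo.JobFlags SET Completed = 0 WHERE JobName = N'Job A';
-- ...and sets it again as its last step.
UPDATE dbo.JobFlags SET Completed = 1 WHERE JobName = N'Job A';

-- Job B's first step checks the flag and raises an error (so the
-- step's retry settings kick in) if Job A has not finished.
IF NOT EXISTS (SELECT 1 FROM dbo.JobFlags
               WHERE JobName = N'Job A' AND Completed = 1)
    RAISERROR('Job A has not completed. This step will retry.', 18, 0);
```

The same caveat applies as with files: you need a way to tell a fresh completion from a stale one, e.g. by also storing a completion timestamp.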

Reverting on a failure would be separate, and would depend on how you solved the first issue.
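For the rollback question, one common approach (a sketch, not necessarily how your packages are structured) is to wrap the load logic in an explicit transaction with TRY/CATCH, so a failure undoes the whole batch:

```sql
-- Make a load step atomic: commit on success, roll back on any error.
BEGIN TRY
    BEGIN TRANSACTION;
    -- ... load steps here ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;  -- re-raise so SQL Agent marks the step as failed
END CATCH;
```

Note that long-running transactions hold locks longer, so this can make the original deadlock/contention problem worse if the batches are large.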

You can write specific items to a log as part of one process, and have another SSIS package monitoring that log and sending an email if certain text is encountered. That log could also be used instead of flag files, but you need to know that the "process A completed" entry is the current one, not one from 15 minutes before.
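If Database Mail is configured on the server, the monitoring step's alert could be as simple as the following sketch (the log table, mail profile, and recipients are all hypothetical):

```sql
-- Check a hypothetical log table for recent error markers and,
-- if any are found, send an alert via Database Mail.
IF EXISTS (SELECT 1 FROM dbo.SsisLog
           WHERE Message LIKE '%ERROR%'
           AND LoggedAt > DATEADD(MINUTE, -15, GETDATE()))
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'AlertsProfile',      -- hypothetical mail profile
        @recipients   = N'ops@example.com',    -- hypothetical recipients
        @subject      = N'SSIS load error detected';
```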

Upvotes: 0
