Recently we faced a major production issue because one of our team members wrote incorrect SQL join code.
We have 50 databases in SQL Server.
Can we create a server-level trigger so that whenever table records change (UPDATE, DELETE) in any database, we receive an alert email?
Note: I am talking about a server-level trigger, not an individual table-level trigger.
Upvotes: 0
Views: 819
Reputation: 3935
(It seems the OP is asking for advice on which solution to implement after failures in production releases - it sounds like there was a spectacular blow-up.)
The answer is: it depends. Stack Overflow is better geared toward solving specific problems than toward providing advice and training (i.e., post the code that is giving you a problem). However, I will at least give you a few things to consider to help you on your way. Perhaps they will let you better articulate the problem you are trying to solve in the future.
Solution 1: Create UPDATE/DELETE triggers that preserve changes to the database so that corrupted data can be restored from a sister table (rather than doing a full database restore). For this, see How to Create Trigger to Keep Track of Last Changed Data. I don't like this idea because it duplicates data and can turn into a data-maintenance nightmare. A sketch of the approach follows.
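Here is a minimal sketch of such a trigger, assuming a hypothetical dbo.Orders table; every table, column, and trigger name below is illustrative only:

```sql
-- History table mirroring the source table, plus audit metadata.
CREATE TABLE dbo.Orders_History (
    HistoryID   INT IDENTITY(1,1) PRIMARY KEY,
    OrderID     INT           NOT NULL,
    OrderAmount DECIMAL(18,2) NOT NULL,
    ChangeType  VARCHAR(10)   NOT NULL,  -- 'UPDATE' or 'DELETE'
    ChangedBy   SYSNAME       NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt   DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- The "deleted" pseudo-table holds the pre-change rows for
    -- both UPDATE and DELETE; "inserted" is empty on a DELETE.
    INSERT INTO dbo.Orders_History (OrderID, OrderAmount, ChangeType)
    SELECT d.OrderID,
           d.OrderAmount,
           CASE WHEN EXISTS (SELECT 1 FROM inserted)
                THEN 'UPDATE' ELSE 'DELETE' END
    FROM deleted AS d;
END;
GO
```

Note that a trigger like this would have to be created on every table you want to protect, in every database, which is exactly the maintenance burden mentioned above.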
Solution 2: Implement auditing throughout the entire design by adding Create (for INSERTs) and Update columns to all of your tables, two for each, storing a timestamp and a username (four columns total). Retrofitting this may be impractical given that your firm already has 50 databases with an unknown number of tables and jobs (note: the jobs would need to be updated to reflect this change and re-deployed). A sketch of these columns follows.
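For illustration, the four audit columns might look like this on the same hypothetical dbo.Orders table (the constraint names are made up):

```sql
-- Create and Update audit columns; the Created* defaults let existing
-- rows and existing INSERT statements keep working unchanged.
ALTER TABLE dbo.Orders ADD
    CreatedAt DATETIME2 NOT NULL
        CONSTRAINT DF_Orders_CreatedAt DEFAULT SYSUTCDATETIME(),
    CreatedBy SYSNAME   NOT NULL
        CONSTRAINT DF_Orders_CreatedBy DEFAULT SUSER_SNAME(),
    UpdatedAt DATETIME2 NULL,  -- set by application code or an UPDATE trigger
    UpdatedBy SYSNAME   NULL;
```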
Even so, it is a good principle to follow because you will be able to track when, and by whom, changes were made to the system without duplicating your data (unlike solution 1). That makes tracking down problems and identifying bad data a simple task. Once the bug is fixed, you can re-run the job for the period covering the bad timestamps so that the data is corrected; this also serves as a good (perfect?) test case for validating the fix. You can take this a step further by adding a separate auditing table that all jobs report to, recording job name, run times, process dates, file names, folder paths, record counts (INSERT, UPDATE, DELETE, ERROR, BYPASS), etc., as sketched below.
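A hedged sketch of such a job-auditing table, with illustrative names and columns drawn from the list above:

```sql
-- One row per job run; every job writes its outcome here.
CREATE TABLE dbo.JobAudit (
    JobAuditID  INT IDENTITY(1,1) PRIMARY KEY,
    JobName     SYSNAME       NOT NULL,
    RunStart    DATETIME2     NOT NULL,
    RunEnd      DATETIME2     NULL,
    ProcessDate DATE          NOT NULL,
    FileName    NVARCHAR(260) NULL,
    FolderPath  NVARCHAR(260) NULL,
    InsertCount INT NOT NULL DEFAULT 0,
    UpdateCount INT NOT NULL DEFAULT 0,
    DeleteCount INT NOT NULL DEFAULT 0,
    ErrorCount  INT NOT NULL DEFAULT 0,
    BypassCount INT NOT NULL DEFAULT 0
);
```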
Then implement foreign keys from your fact tables to that auditing table for easy cross-reference (keep in mind that you can normalize this with dimension tables). This is a lot of design and development work, but it will save both the DBA and the developer time and sanity when tracking what the system does, by providing a clear, concise, table-driven record of the who, what, where, when, and why - as opposed to digging through logs or, worse, failing to identify the problem within the current data model.
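The cross-reference could be a sketch like this, where dbo.FactSales is a hypothetical fact table: each fact row records the job run that loaded it, so bad loads can be found and re-run by JobAuditID.

```sql
-- Tag every fact row with the job run that produced it.
ALTER TABLE dbo.FactSales
    ADD JobAuditID INT NULL
        CONSTRAINT FK_FactSales_JobAudit
        REFERENCES dbo.JobAudit (JobAuditID);
```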
There are probably more solutions, but the two above illustrate the underlying point: there is no optimal solution for fixing poor design, coding, and regression testing. You can either put a band-aid on it with triggers, or adopt a design methodology that builds auditing into your tables.
Upvotes: 1