Reputation: 113
I've read dozens of articles about why this is a bad idea. The arguments against using a detached database as a backup are legion. However, I still think that in my case, it makes good sense. Realizing that "it makes sense in my case" is often the refrain of admins who know too little, I'd like to put my strategy to the good folks here to see if someone can tell me why what I'm doing would be more properly done with the internal backup mechanism.
Let me address some of the common issues.
The total size of the DB is about 1TB. My main reason for detaching and attaching instead of using backups is performance. In my testing, it is significantly faster to detach the database, make a copy of the underlying files, and attach the original files again than it is to perform a backup. During recovery, it is also significantly faster to attach files (even if I have to copy them to the proper location first) than it is to restore them.
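For context, the nightly detach/copy/attach cycle described above can be sketched in T-SQL roughly as follows. The database name and file paths are placeholders, not the poster's actual configuration:

```sql
USE master;
GO
-- Kick out active connections, then detach so the files can be copied
ALTER DATABASE ReportDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_detach_db @dbname = N'ReportDB';
GO
-- ...copy the MDF/LDF/NDF files at the OS level (robocopy, etc.)...
-- then re-attach the original files:
CREATE DATABASE ReportDB
    ON (FILENAME = N'D:\Data\ReportDB.mdf'),
       (FILENAME = N'D:\Data\ReportDB_log.ldf')
    FOR ATTACH;
GO
```

The database is offline for the whole window between the detach and the re-attach, which is the trade-off discussed in the answer below.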
I can get around the backup performance problem by using something other than a full backup, but that doesn't help when it comes to restoration. In the event of a disaster, I need to be back up and running quickly. My application can handle small amounts of downtime periodically, but any large stretch of downtime is disastrous. Restoring 1TB of database takes longer than the business wishes to tolerate for the application.
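The "something other than a full backup" here would most commonly be a differential backup, which captures only the extents changed since the last full backup and is therefore usually much smaller and faster. A sketch with placeholder names:

```sql
-- Differential: only extents changed since the last FULL backup.
-- Restore still requires the last full backup plus this differential.
BACKUP DATABASE ReportDB
    TO DISK = N'E:\Backup\ReportDB_diff.bak'
    WITH DIFFERENTIAL, COMPRESSION;
```

As the poster notes, this speeds up the backup but not the restore: recovery still means restoring the full backup first, then the differential.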
The last item I've often read is that detaching a database comes with some risk. Just like we ought to perform test restores of backups, I immediately attach any copied MDF/LDF/NDF files to a disaster recovery SQL Server to make sure the copy is good. I suppose my exposure here is that I could detach the database and break something such that the original DB files can no longer be re-attached. Honestly, I've never seen this, so if this is really a possibility, I feel that it's quite remote. I'm doing this nightly, so I'd lose a day's worth of reporting data in this (unlikely?) scenario.
Am I missing anything else?
Upvotes: 1
Views: 442
Reputation: 25526
With this approach you are choosing to prioritise reduced recovery time over recovery point (data lost when you restore). That's a reasonable trade off that everyone has to make to some degree.
Your database will be offline every time you detach and re-attach to do a backup, whereas the BACKUP command requires no downtime at all. It seems unusual to be more concerned about downtime during an occasional restore than about the downtime incurred every day during backups, but I guess that depends on the actual timing of the backups and the hypothetical timing of the restores.
You haven't mentioned transaction log backups. For most people log backups give the optimal recovery time and recovery point. Have you considered relatively infrequent log backups as an alternative strategy?
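A minimal log-backup job, assuming the database is in the FULL (or BULK_LOGGED) recovery model, looks like this (names and paths are placeholders; in practice you would schedule it via SQL Server Agent and use unique file names per run):

```sql
-- Run on a schedule, e.g. every 15 minutes.
-- Each log backup shrinks the recovery point to at most one interval.
BACKUP LOG ReportDB
    TO DISK = N'E:\Backup\ReportDB_log.trn'
    WITH COMPRESSION;
```

Because each log backup is small, both the backup cost and the potential data loss drop from a full day to the chosen interval.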
If recovery time is such a priority then you'll presumably need to have warm standby hardware that you can restore to. If you do have a standby server then you could use standard backup and restore to minimise downtime much more than you can by using the detach method: just restore your database to the standby server every day. You could even log-ship your transaction log backups.
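The standby idea amounts to restoring each backup WITH NORECOVERY so that subsequent log backups can still be applied, and only recovering the database at failover time. A hedged sketch with placeholder names, run on the standby server:

```sql
-- Restore the full backup, leaving the database in RESTORING state
RESTORE DATABASE ReportDB
    FROM DISK = N'\\BackupShare\ReportDB_full.bak'
    WITH NORECOVERY, REPLACE;

-- Apply each subsequent log backup the same way (log shipping)
RESTORE LOG ReportDB
    FROM DISK = N'\\BackupShare\ReportDB_log.trn'
    WITH NORECOVERY;

-- At failover time, bring the standby online:
RESTORE DATABASE ReportDB WITH RECOVERY;
```

Using STANDBY instead of NORECOVERY would additionally allow read-only access to the standby copy between restores, at the cost of a slightly slower restore chain.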
As with any backup and restore strategy, you should test it out. Do a trial run and see just how much losing a day's work actually costs; it's easy to underestimate that cost.
When you detach and re-attach make sure you include the log file(s). Keep an offline copy in addition to the copy that you attach. If an attach happens to fail (say because one of the files has moved) then in some cases the files may get "touched" so that they cannot easily be attached again. My advice would be never to try attaching from your only copy of the database files.
You will still need to use BACKUP to backup the Master database (and model/msdb if required).
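System database backups are just ordinary BACKUP statements (the paths here are placeholders):

```sql
BACKUP DATABASE master TO DISK = N'E:\Backup\master.bak' WITH COMPRESSION;
BACKUP DATABASE msdb   TO DISK = N'E:\Backup\msdb.bak'   WITH COMPRESSION;
BACKUP DATABASE model  TO DISK = N'E:\Backup\model.bak'  WITH COMPRESSION;
```

Without a master backup you'd lose logins and server-level configuration in a disaster; msdb holds Agent jobs and backup history.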
Upvotes: 3