Amandeep Singh

Reputation: 16

SQL Server: setting a max log file size leads to failed transactions

I have a database with a log file size of 700 MB. I have set its max file size to 1 GB. When the log reaches 1 GB, transactions fail with the error: "The transaction log for database is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases".

The same thing happens if I uncheck Autogrowth for the log file.

When I checked the log_reuse_wait_desc column in sys.databases, it said "ACTIVE_TRANSACTION".
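For reference, this is the kind of query used to check that column, together with DBCC SQLPERF(LOGSPACE), which reports current log size and percentage used (N'YourDatabase' is a placeholder for your actual database name):

```sql
-- Why can't the log space be reused, and what is the recovery model?
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';  -- placeholder: your database name

-- Current size and percent-used of every database's transaction log
DBCC SQLPERF(LOGSPACE);
```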

I am not able to understand why SQL Server is not maintaining the max file size limit. Why can it not delete old log records, or something like that, to stay within the limit? How does this work?

What I want is to ensure the log file never exceeds 1 GB under any circumstances.

Upvotes: 0

Views: 2175

Answers (1)

DimUser

Reputation: 270

There are a few things you need to consider here, especially if you want to restrict the log file size to 1GB.

  1. As already mentioned, you need to understand the difference between the three recovery models. Taking log backups is a key task when using the full recovery model. However, this is only part of the problem: log backups only truncate the inactive part of the log, so a single open transaction can still fill the log file with 1 GB+ of data, putting you right back where you are now... even under the simple recovery model (a log backup will not help you there!).
  2. In an ideal situation, you would not restrict the log file in this way, precisely because of this problem. If possible, allow it to auto-grow so that, in theory, it could fill the disk.
  3. Transaction log management is a science in itself. Kimberly Tripp has some very good advice on how to manage transaction log throughput here
  4. Understanding VLFs (virtual log files) will allow you to better manage your transaction log, and can help you size the log file appropriately for large transactions.
  5. If, after everything you have read, you are still required to limit transaction log growth, then you will need to consider updating large result sets in batches. This lets you update, say, 1,000 rows at a time, meaning only 1,000 records need to be logged per transaction. SQL Server uses write-ahead logging, so in order to complete a transaction it must first have enough space in the transaction log to write all the details. Under the simple recovery model, the inactive log is truncated automatically at checkpoints, meaning you don't need to back up the log. Therefore, writing 1,000 records at a time (for example) will cause far less of a problem than one huge 1,000,000-record insert.
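The batching approach in point 5 can be sketched roughly like this. This is a hypothetical example: the table dbo.BigTable and column IsExpired are placeholders, and it assumes the simple recovery model so log space is reclaimed between batches:

```sql
-- Delete in batches of 1,000 so each transaction only needs
-- ~1,000 rows' worth of transaction log space.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (1000)
    FROM dbo.BigTable          -- placeholder table
    WHERE IsExpired = 1;       -- placeholder predicate

    SET @rows = @@ROWCOUNT;    -- 0 when nothing is left to delete

    -- Under the simple recovery model a checkpoint allows the log
    -- space used by the previous batch to be reused.
    CHECKPOINT;
END
```

The same pattern works for UPDATEs and INSERTs; the key point is that each iteration is its own small transaction rather than one enormous one.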

Redgate provides a free e-book to help you on your way!

EDIT: P.s. I've just read your comment above... If you are in the full recovery model you MUST take log backups, otherwise SQL Server WILL NOT reclaim the space from the log, and will keep writing to it, causing it to expand! Note, however, that you MUST have a full backup before transaction log backups can take effect. SQL Server cannot back up the log if it doesn't have an initial restore point (i.e. the full backup).
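A minimal sketch of that sequence, assuming the full recovery model; the database name and backup paths are placeholders you would replace with your own:

```sql
-- A full backup must exist first; it is the initial restore point.
BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backups\YourDatabase_full.bak';

-- Subsequent log backups truncate the inactive portion of the log,
-- allowing that space to be reused. Schedule these regularly.
BACKUP LOG YourDatabase
TO DISK = N'D:\Backups\YourDatabase_log.trn';
```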

Upvotes: 1
