ctmcklowe96

Reputation: 97

SQL 2005 Log File Initial Size

I have a question regarding the SQL Server log file size and what it should be set to after a log file backup. I know it depends on a lot of factors and that there is no single right answer (relatively speaking; I wouldn't start the log at the 1 MB default), but how many VLFs should there be in the log file if we did around 200 MB of transactions (or our .mdf file grew by that much), and how big would those VLFs be, or how big would the log file be? I've read the blogs from Kimberly Tripp and Paul Randal, but it's still foggy at best.

Upvotes: 0

Views: 218

Answers (2)

paparazzo

Reputation: 45096

History is typically your best predictor of future size.
If the log typically gets to 20 GB between backups, then you can expect it to get to 20 GB between backups.

If you size it for 20 GB, then it is not likely to have to grow.

If the log has to grow 20 or even 100 times, it is not that big of a deal.
If the log has to grow 1,000 times, then you have for sure sized it wrong.
A 1 MB log that grows only 1 MB at a time to reach 1,000 MB is bad.
A 100 MB log that grows by 100% would only have to grow 4 times to reach 1,600 MB.
There is also a maximum size setting.
The recovery model has an enormous impact on log size.
There is a lot more to consider than the initial size.
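As a sketch of sizing the log up front instead of relying on tiny auto-grows, something like the following could be used (the database name, logical file name, and sizes here are illustrative assumptions, not values from the question):

```sql
-- Hypothetical database and logical file names; check sys.master_files
-- for the real logical name in your environment.
-- Pre-size the log to the observed peak, grow in fixed chunks rather
-- than 1 MB increments, and cap the maximum size.
ALTER DATABASE MyDb
MODIFY FILE
(
    NAME = MyDb_log,      -- logical name of the log file
    SIZE = 20480MB,       -- initial size matching the typical 20 GB peak
    FILEGROWTH = 512MB,   -- fixed-chunk growth instead of a percentage
    MAXSIZE = 51200MB     -- upper bound so a runaway transaction cannot fill the drive
);
```

Fixed-size growth increments are generally preferred over percentage growth, because a percentage grows by larger and larger amounts as the file gets bigger.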

Also look at putting the log on a separate drive to spread IO.

Look at putting tempdb on a separate drive to spread IO.

Upvotes: 1

souplex

Reputation: 981

Like you say, it is very much an 'it depends' answer. I might oversimplify things a bit, but I usually picture the process like this:

Before 'stuff' is written to the data files, every transaction is first written to the log. The log never gets cleared. When the log file 'rolls over', the write pointer is moved back to the top of the file and starts writing over the old stuff again.

  • So the log needs to be at least as large as the largest single transaction.

  • The next consideration is the number of concurrent users. When you have 10 users doing the largest transaction at the same time, well, you get the point.

  • Another consideration is usage intensity: how much data accumulates before a checkpoint happens. A checkpoint is when the modified data is written to the .mdf files in a batch-like manner.

  • And a final thing to consider is your recovery model. In full recovery, the part of the log file that has not yet been 'protected' by a transaction log backup cannot be rolled over, and the log will continue growing until a transaction log backup has been made.
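The last point can be checked and acted on directly; a minimal sketch, assuming a database called MyDb and an illustrative backup path:

```sql
-- Check the recovery model; under FULL, log space cannot be reused
-- until a log backup has run.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDb';

-- A regular log backup marks the inactive portion of the log as
-- reusable, letting the write pointer wrap around instead of
-- forcing the file to grow.
BACKUP LOG MyDb
TO DISK = 'E:\Backups\MyDb_log.trn';
```

How often to run the log backup depends on how fast the log fills and how much data loss is acceptable.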

You only need a single physical log file per database. SQL Server writes to the log sequentially, so it can only use one file at a time.

You want to have as few virtual log files as possible: give the log file enough initial space to start with, and from there trigger as few auto-grows as possible. VLFs are created by the auto-growth process (and by the initial file creation).
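To actually count the VLFs, SQL Server 2005 offers `DBCC LOGINFO` (undocumented but widely used); the database name below is a placeholder:

```sql
-- DBCC LOGINFO returns one row per VLF in the current database.
USE MyDb;   -- hypothetical database name
DBCC LOGINFO;
-- The number of rows returned is the number of VLFs;
-- a Status of 2 marks a VLF that is currently active.
```

If the row count runs into the hundreds or thousands, the log has almost certainly been grown in increments that were too small.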

And when you do auto-grow, make sure it grows in decent chunks. Also remember that each VLF needs to be zeroed out first: the new space must be completely filled with zeros before it can be used (log files cannot use instant file initialization). When you specify very big chunks (say 1 GB or so), every query 'in flight' on this database may be queued until the zero-out process completes.

When I don't have a clue about the log needs, I personally start out with a 500 MB initial size and an auto-growth of 100 MB chunks. From there I monitor the auto-growth events and adjust where needed.
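One way to monitor those auto-growth events is to read the default trace, which is enabled out of the box on SQL Server 2005; a sketch (no custom names assumed beyond what the system views provide):

```sql
-- Read log auto-growth events from the default trace.
-- EventClass 93 = Log File Auto Grow.
DECLARE @trace nvarchar(260);
SELECT @trace = path FROM sys.traces WHERE is_default = 1;

SELECT DatabaseName,
       FileName,
       StartTime,
       Duration / 1000 AS duration_ms,          -- Duration is reported in microseconds
       (IntegerData * 8) / 1024 AS growth_mb    -- IntegerData = number of 8 KB pages grown
FROM fn_trace_gettable(@trace, DEFAULT)
WHERE EventClass = 93
ORDER BY StartTime DESC;
```

Frequent rows here, or long durations, are the signal to bump the initial size or the growth increment.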

Upvotes: 1
