Arcadian

Reputation: 4350

SQL Transaction log growth

My transaction log file is growing each day to a very large size.

Could it be the way some of my stored procedures are written, or how I do inserts, that is creating the problem?

Is there a way to minimize the log size by re-writing queries?

Is there something in the query that causes the log size to grow faster?

Upvotes: 1

Views: 1407

Answers (3)

Ben J. Boyle

Reputation: 306

You haven't mentioned what your backup strategy is, or what the recovery requirements are. If you genuinely don't need to be able to restore to a point in time, and would be happy restoring to your regular backup point, you could also consider setting the recovery model to simple.

Probably not recommended in most scenarios, but it's a possibility.
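If you do go that route, a minimal sketch would look like the following (MyDB is a placeholder; substitute your own database name):

    -- Check the current recovery model first
    SELECT name, recovery_model_desc FROM sys.databases WHERE name = N'MyDB';

    -- Switch to simple recovery; the log will then truncate on checkpoint
    ALTER DATABASE [MyDB] SET RECOVERY SIMPLE;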

If you do need the detailed logs, then as others have said you'll need to balance your tlog backups against file growth. If the log file is growing too large, too quickly, then back it up more frequently.
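A more frequent log backup is just a scheduled statement along these lines (database name and backup path are placeholders for your own):

    BACKUP LOG [MyDB] TO DISK = N'D:\Backups\MyDB_log.trn';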

Article from RedGate: https://www.simple-talk.com/sql/learn-sql-server/managing-transaction-logs-in-sql-server/

Upvotes: 1

John Eisbrener

Reputation: 672

Please don't shrink the transaction log file unless there's a good reason for it. If the TLog is growing daily, the first thing you should do is increase the frequency of your TLog backups. If you're still seeing the TLog grow daily, you may be running into an issue preventing the active portion of the log (the VLFs) from wrapping back around to the head of the file. This often presents itself as the first record returned by DBCC LOGINFO having a status of 2, followed by a lot of records with a status of 0. To fix this problem you can reference my answer to a similar DBA.SE question, here.
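To see whether that is what's happening, something like this (run against the affected database; MyDB is a placeholder) shows the VLF layout and what, if anything, is holding the log up:

    USE [MyDB];
    -- One row per virtual log file; Status = 2 means the VLF is still active
    DBCC LOGINFO;

    -- Shows why the log can't be truncated (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION)
    SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'MyDB';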

Upvotes: 0

Ven

Reputation: 2014

There are various things to investigate when the log file grows large. I would follow a step-wise checklist:

1) Find high-impact queries using the following SQL:

SELECT TOP 5 t.TEXT AS 'SQL Text'
    ,st.execution_count
    ,ISNULL(st.total_elapsed_time / st.execution_count, 0) AS 'AVG Execution Time'
    ,st.total_worker_time / st.execution_count AS 'AVG Worker Time'
    ,st.total_worker_time
    ,st.max_logical_reads
    ,st.max_logical_writes
    ,st.creation_time
    -- NULLIF guards against divide-by-zero for plans compiled within the last second
    ,ISNULL(st.execution_count / NULLIF(DATEDIFF(second, st.creation_time, GETDATE()), 0), 0) AS 'Calls Per Second'
FROM sys.dm_exec_query_stats st
CROSS APPLY sys.dm_exec_sql_text(st.sql_handle) t
ORDER BY st.total_elapsed_time DESC

2) Once you know the logical reads/writes, you can follow up on what the impact of each stored procedure or query is and optimize it.

3) You can always shrink the log file, but I would say analyse the considerations in shrinking it first (production environment, execution time, impact on other users).

4) If you think shrinking the log file will have minimal impact, use this SQL: DBCC SHRINKFILE (N'My DB_Log', 1000). Check the log file size first; I wouldn't shrink it to 1. Read this blog for more info: DBCC SHRINKFILE on log file not reducing size even after BACKUP LOG TO DISK
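As a sketch of the full sequence (database, file name, and target size here are placeholders; pick a target your workload actually needs):

    -- Back up the log first so the active VLFs can clear, then shrink to a sensible size in MB
    BACKUP LOG [MyDB] TO DISK = N'D:\Backups\MyDB_log.trn';
    DBCC SHRINKFILE (N'MyDB_Log', 1000);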

Upvotes: 2
