ChrisD

Reputation: 1239

Why is the transaction log growing so large?

I'm performing an update on a database that inserts a 15-digit number into a single column across 270,000,000 rows. I figured the space required should be around 4GB, but the update is still running and the transaction log has just hit 180GB.
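For concreteness, the statement is of roughly this shape (the table and column names below are hypothetical). Note that a single UPDATE like this runs as one transaction, so the log has to retain every change until it commits:

    -- Hypothetical names; one statement is one transaction, so the
    -- log cannot be truncated past it while it is still running.
    UPDATE dbo.BigTable
    SET TrackingNumber = '123456789012345';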

Upvotes: 0

Views: 137

Answers (2)

usr

Reputation: 171178

Probably all pages are splitting due to the significant amount of data added (4/180 ≈ 2.2%; that might not seem like much, but it is probably enough to push many pages over the edge).
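One way to check for splits (a sketch, assuming a hypothetical table dbo.BigTable) is to look at fragmentation and page fullness after the update:

    -- Fragmentation and page fullness for the table's indexes;
    -- heavy splitting shows up as high fragmentation and
    -- half-empty pages (the table name is hypothetical).
    SELECT index_id,
           avg_fragmentation_in_percent,
           avg_page_space_used_in_percent
    FROM sys.dm_db_index_physical_stats(
           DB_ID(), OBJECT_ID('dbo.BigTable'), NULL, NULL, 'SAMPLED');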

Rebuild the clustered index with a fill factor (90 is probably enough). Then you will not get any page splits when updating.
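A minimal sketch of that rebuild, assuming a hypothetical clustered index PK_BigTable on dbo.BigTable:

    -- Leave 10% free space on every leaf page so the widened
    -- rows fit without splitting (names are hypothetical).
    ALTER INDEX PK_BigTable ON dbo.BigTable
    REBUILD WITH (FILLFACTOR = 90);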

If this does not help, we need to dig deeper.

In any case there will be significant log growth, and it will certainly be more than 4GB. But 180GB sounds like too much; it sounds as though whole pages are being logged.

Upvotes: 0

Mr. Llama

Reputation: 20889

Transactions have to store a lot of information just in case the changes need to be rolled back:

- A sequential value, so the order in which records were updated or inserted is known.
- The original value of the column (some RDBMSs may even store the whole row!).
- A unique identifier to tie the logged data back to the row's location.

It has to store all of this because if something catastrophic happens -- like the database crashing -- it needs to be able to return to a consistent state.

Yes, 15 digits × 270 million rows may come out to around 4 GB, but that completely ignores all of the very important metadata required.
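If you want to watch the log fill up while the update runs, SQL Server (for example) reports per-database log usage through the standard DBCC SQLPERF command:

    -- Log size and percent used for every database on the instance.
    DBCC SQLPERF(LOGSPACE);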

If this is a one-off update that doesn't need to be repeated, it may be faster to simply recreate the table with the column already updated. Compared to inserts/updates/deletes, creating a table from a select requires almost no transaction logging.
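A minimal sketch of that approach, with hypothetical names, assuming SQL Server under SIMPLE or BULK_LOGGED recovery (the condition under which SELECT ... INTO is minimally logged):

    -- Build the replacement table with the column already set
    -- (minimally logged under SIMPLE/BULK_LOGGED recovery;
    -- all names here are hypothetical).
    SELECT SomeKey,
           '123456789012345' AS TrackingNumber,
           OtherColumn
    INTO dbo.BigTable_New
    FROM dbo.BigTable;

    -- Swap the tables; the old table is dropped and the new
    -- one takes its name.
    DROP TABLE dbo.BigTable;
    EXEC sp_rename 'dbo.BigTable_New', 'BigTable';

Bear in mind that indexes, constraints, and permissions do not come along with SELECT ... INTO and would need to be recreated on the new table afterwards.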

Upvotes: 1
