O.O

Reputation: 11287

Update statement running for too long or not

I'm new to working with this much data (20 million rows) and I don't know what I should expect in terms of query duration:

update table set field = '1234'  

There is no index on field. This statement took 25 minutes, and the database uses the Simple recovery model. Does 25 minutes seem too long? The table has 9 columns, all with small data types (varchar under 50).

Upvotes: 8

Views: 6507

Answers (3)

Peter Tillemans

Reputation: 35331

You are updating 20 million records in about 1,500 seconds, which works out to roughly 13,000 updates per second. Sounds about right.

Upvotes: 2

Remus Rusanu

Reputation: 294227

If you updated 20M rows in one single transaction, then your time was entirely driven by your IO subsystem: what kind of drives you have, how the data and log files are laid out on them, and so on. If you have 40 spindles in RAID 10 with 4 balanced data files and a separate, similar set of spindles for the log, then the result is worryingly slow. If you tested this with a single MDF that shares its spindle with the LDF on one consumer-quality 5400 RPM HDD, then your time is amazingly fast.
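
One way to check whether writes to the data (MDF) and log (LDF) files were the bottleneck is to look at the per-file stall times SQL Server records in sys.dm_io_virtual_file_stats. A minimal sketch, run in the database that was updated:

    -- Per-file write counts and write-stall times for the current database.
    -- High io_stall_write_ms on the log file points at the IO subsystem.
    SELECT f.name,
           f.type_desc,              -- ROWS = data file, LOG = log file
           s.num_of_writes,
           s.io_stall_write_ms       -- total ms spent waiting on writes
    FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS s
    JOIN sys.database_files AS f
        ON s.file_id = f.file_id;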

Upvotes: 13

Daniel Renshaw

Reputation: 34177

  • Are there any indexed views referencing that field?
  • Is that field published in a (transactional) replication scheme?
  • Are other sessions accessing that table at the same time the update is running?
  • Are the log and data files stored on separate disks (physical, not two different partitions of the same hardware)?
  • Are there any check constraints referencing that field?
  • Are there any triggers on that table?

All of these, and likely many other factors, will affect data modification performance.
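
For a couple of the items on that list, a quick check against the catalog views is enough; a minimal sketch, assuming a hypothetical table named dbo.MyTable:

    -- Any triggers defined on the table?
    SELECT name
    FROM sys.triggers
    WHERE parent_id = OBJECT_ID('dbo.MyTable');

    -- Any check constraints on the table?
    SELECT name, definition
    FROM sys.check_constraints
    WHERE parent_object_id = OBJECT_ID('dbo.MyTable');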

Otherwise, try batching the update using TOP and a WHERE clause that finds only unmodified rows, along the lines of the sketch below.
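
A minimal sketch of that batching approach, assuming a hypothetical table dbo.MyTable with the column field and a batch size of 50,000 rows:

    DECLARE @rows int = 1;

    WHILE @rows > 0
    BEGIN
        -- Update only rows that do not yet have the new value.
        UPDATE TOP (50000) dbo.MyTable
        SET field = '1234'
        WHERE field <> '1234' OR field IS NULL;

        SET @rows = @@ROWCOUNT;   -- stop when no unmodified rows remain
    END;

Keeping each batch in its own small transaction lets the log be reused between batches under the Simple recovery model and keeps lock durations short.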

Upvotes: 1
