Reputation: 2973
I have a huge table (a few billion rows) with a clustered index and two non-clustered indexes.
A BULK INSERT operation produces 112,000 reads but only 383 writes (duration: 19,948 ms).
This is very confusing to me. Why do the reads exceed the writes by so much, and how can I reduce them?
Update: here is the query:
insert bulk DenormalizedPrice4 (
    [DP_ID] BigInt, [DP_CountryID] Int, [DP_OperatorID] SmallInt, [DP_OperatorPriceID] BigInt,
    [DP_SpoID] Int, [DP_TourTypeID] Int, [DP_CheckinDate] Date, [DP_CurrencyID] SmallInt,
    [DP_Cost] Decimal(9,2), [DP_FirstCityID] Int, [DP_FirstHotelID] Int, [DP_FirstBuildingID] Int,
    [DP_FirstHotelGlobalStarID] Int, [DP_FirstHotelGlobalMealID] Int,
    [DP_FirstHotelAccommodationTypeID] Int, [DP_FirstHotelRoomCategoryID] Int,
    [DP_FirstHotelRoomTypeID] Int, [DP_Days] TinyInt, [DP_Nights] TinyInt,
    [DP_ChildrenCount] TinyInt, [DP_AdultsCount] TinyInt, [DP_TariffID] Int,
    [DP_DepartureCityID] Int, [DP_DateCreated] SmallDateTime, [DP_DateDenormalized] SmallDateTime,
    [DP_IsHide] Bit, [DP_FirstHotelAccommodationID] Int
) with (CHECK_CONSTRAINTS)
There are no triggers and no foreign keys. The table has a clustered index on DP_ID and two non-unique non-clustered indexes (with FILLFACTOR = 90).
One more thing: the database is stored on RAID 50 with a 256 KB stripe size.
Upvotes: 3
Views: 1207
Reputation: 238176
For each row you insert, the database has to:

- traverse the clustered index B-tree to find the page where the new DP_ID belongs, reading one page per index level along the way (see the index-depth sketch below);
- do the same for each of the two non-clustered indexes, locating and updating the correct leaf page in each;
- evaluate the table's constraints, since the statement runs WITH (CHECK_CONSTRAINTS);
- split pages that are full, if necessary, and write the change to the transaction log.
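You can see how tall those B-trees are, and therefore roughly how many pages each per-row seek has to read, with a query like this minimal sketch (it assumes the dbo schema for the table named in the bulk insert above; INDEXPROPERTY's IndexDepth property returns the number of levels in an index):

-- Sketch: report the depth of every index on the table.
-- Assumes schema dbo; each level costs roughly one page read per seek.
SELECT i.name AS index_name,
       INDEXPROPERTY(i.object_id, i.name, 'IndexDepth') AS index_depth
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID(N'dbo.DenormalizedPrice4')
  AND i.index_id > 0;

On a table with a few billion rows, the clustered index alone is typically four or five levels deep, so every inserted row touches several pages in each of the three indexes before a single byte is written.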
All of that per-row work can cause a lot of reads, and it has to happen for every row, so it's perfectly normal to see reads far outnumber writes during an insert. In fact, I'd take it as a healthy sign that the database is doing its work.
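If you want to watch this happen, here is a small self-contained demo (the table and index names are made up for illustration, not taken from your schema). SET STATISTICS IO reports the logical reads incurred by a single-row insert, and that count grows with the number of indexes on the table:

-- Hypothetical demo table, not the question's table.
CREATE TABLE dbo.demo_prices (
    id         BIGINT       NOT NULL PRIMARY KEY CLUSTERED,
    country_id INT          NOT NULL,
    cost       DECIMAL(9,2) NOT NULL
);
CREATE NONCLUSTERED INDEX ix_demo_country ON dbo.demo_prices (country_id);
CREATE NONCLUSTERED INDEX ix_demo_cost    ON dbo.demo_prices (cost);

SET STATISTICS IO ON;
-- The insert must read pages in the clustered index and in both
-- non-clustered indexes before it can write the row.
INSERT INTO dbo.demo_prices (id, country_id, cost)
VALUES (1, 42, 99.95);
SET STATISTICS IO OFF;
-- The Messages tab shows the 'logical reads' count for dbo.demo_prices.

As for reducing the reads: a common approach for a large load is to disable the non-clustered indexes first and rebuild them after the load, so each row only has to be placed into the clustered index.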
Upvotes: 2