Reputation: 1351
I have an application which stores continuous data. Rows are then selected based on two columns (a timestamp and an integer).
To keep performance as good as possible, I have to recalculate the statistics for the indices, but I have two problems with recalculating on a fixed time interval:
I have read that it is good to recalculate index statistics once per day, during the period of minimum load. That is great advice for a typical web or company database, but my situation is completely different, so I would like to add some "intelligence" to the automatic recalculation.
Is there some number (42; 1,000; 1,000,000?) of rows inserted per table after which the statistics should be recalculated? Does it also depend on the total number of rows currently in the table?
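One way to make the recalculation row-count-driven rather than time-driven is to track how many rows have been written since the last statistics update and re-run the engine's analyze command once that count exceeds a fraction of the table size. The sketch below uses SQLite's ANALYZE for illustration; the table name, helper class, and the 20% threshold are my own assumptions (the threshold is borrowed from the default analyze scale factor PostgreSQL's autovacuum uses), not something from the question:

```python
import sqlite3

# Assumed threshold: re-analyze once writes since the last ANALYZE
# exceed 20% of the table size (mirrors PostgreSQL's default
# autovacuum_analyze_scale_factor). Tune for your workload.
ANALYZE_SCALE_FACTOR = 0.2

class StatsTracker:
    """Hypothetical helper that triggers ANALYZE by write volume."""

    def __init__(self, conn, table):
        self.conn = conn
        self.table = table
        self.rows_since_analyze = 0
        self.rows_at_last_analyze = 0

    def record_writes(self, n):
        # Count new writes and re-analyze once the threshold is crossed.
        self.rows_since_analyze += n
        threshold = max(50, self.rows_at_last_analyze * ANALYZE_SCALE_FACTOR)
        if self.rows_since_analyze > threshold:
            self.conn.execute(f"ANALYZE {self.table}")
            self.rows_at_last_analyze = self.conn.execute(
                f"SELECT COUNT(*) FROM {self.table}").fetchone()[0]
            self.rows_since_analyze = 0
            return True
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts INTEGER, val INTEGER)")
conn.execute("CREATE INDEX idx_samples_ts ON samples (ts)")
tracker = StatsTracker(conn, "samples")

conn.executemany("INSERT INTO samples VALUES (?, ?)",
                 [(i, i % 10) for i in range(1000)])
ran = tracker.record_writes(1000)
print(ran)  # True: 1000 new writes exceed the threshold
```

The same idea maps onto other engines by swapping ANALYZE for their statistics command (e.g. UPDATE STATISTICS); many servers already ship an auto-update driven by a similar modification counter.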
Upvotes: 1
Views: 1039
Reputation: 5481
The server uses statistics to select the best possible index from the available ones. Check the plan of your query on a non-empty database. If the plan is optimal with the current statistics, and the relative data distribution doesn't vary over time, or there are simply no other indices to choose from, then there is no need for forced recalculation.
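Checking the plan is done with whatever plan-inspection facility your engine offers (some form of EXPLAIN in most of them). A minimal SQLite illustration, with hypothetical table and index names standing in for the ones from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts INTEGER, val INTEGER)")
conn.execute("CREATE INDEX idx_samples_ts ON samples (ts)")
conn.executemany("INSERT INTO samples VALUES (?, ?)",
                 [(i, i % 10) for i in range(1000)])

# Ask the engine which access path it would pick for a range
# query on the timestamp column.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM samples WHERE ts BETWEEN 100 AND 200"
).fetchall()
for row in plan:
    # The last column holds the human-readable plan step,
    # e.g. a SEARCH using idx_samples_ts rather than a full SCAN.
    print(row[-1])
```

If the plan already uses the index you expect, forcing a statistics refresh is unlikely to change anything.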
Another approach involves either specifying the optimal plan directly in the query text, or using arithmetic operations to exclude an index on some field from evaluation, regardless of the actual statistics.
For example, if query contains condition:
table_1.some_field = table_2.some_field
and you don't want the server to use the index on table_1.some_field, then write:
table_1.some_field + 0 = table_2.some_field
This way you can force the server to use one index over another.
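The effect of the "+ 0" rewrite can be observed directly in the query plan. The sketch below demonstrates it in SQLite (table and index names are hypothetical; the technique is engine-agnostic, but how thoroughly a given optimizer sees through such expressions varies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (some_field INTEGER)")
conn.execute("CREATE TABLE t2 (some_field INTEGER)")
conn.execute("CREATE INDEX idx_t1 ON t1 (some_field)")
conn.executemany("INSERT INTO t1 VALUES (?)", [(i,) for i in range(100)])
conn.executemany("INSERT INTO t2 VALUES (?)", [(i,) for i in range(100)])

def plan(sql):
    """Return the human-readable plan steps joined into one string."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " | ".join(r[-1] for r in rows)

# Plain equality: the optimizer is free to use idx_t1 on t1.some_field.
with_index = plan(
    "SELECT * FROM t2 JOIN t1 ON t1.some_field = t2.some_field")

# Adding '+ 0' turns the column into an expression, so the plain
# index on t1.some_field no longer matches and is excluded.
without_index = plan(
    "SELECT * FROM t2 JOIN t1 ON t1.some_field + 0 = t2.some_field")

print(with_index)     # plan mentions idx_t1
print(without_index)  # t1 is scanned instead
```

Treat this as a last-resort hint: unlike statistics, it hard-codes the decision into the query text and will not adapt if the data distribution later changes.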
Upvotes: 1