Reputation: 1093
I have a table in my data warehouse with 300M+ rows that is ideal for a clustered columnstore index, but I'm not sure of the best way to apply it.
Should I create an empty replica of the table with the columnstore index in place, and select my data into it, or should I drop all the normal indexes on the existing table and add the columnstore index directly to it?
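To make the comparison concrete, here is roughly what I mean by each option (table, column and index names are just placeholders):

```sql
-- Option 1: build an empty copy of the table with the columnstore index,
-- then reload it and swap the tables afterwards.
CREATE TABLE dbo.FactSales_New
(
    SaleId   BIGINT        NOT NULL,   -- same columns as dbo.FactSales
    SaleDate DATE          NOT NULL,
    Amount   DECIMAL(18,2) NOT NULL
);
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales_New ON dbo.FactSales_New;

INSERT INTO dbo.FactSales_New WITH (TABLOCK)   -- TABLOCK allows a parallel insert
SELECT SaleId, SaleDate, Amount
FROM dbo.FactSales;

-- Option 2: drop the rowstore indexes on the existing table,
-- then build the columnstore index directly on it.
DROP INDEX IX_FactSales_SaleDate ON dbo.FactSales;
ALTER TABLE dbo.FactSales DROP CONSTRAINT PK_FactSales;   -- the clustered PK
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;
```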
I have 12 CPU cores and 100GB of RAM dedicated to SQL Server 2019. I’ll do the update during a maintenance window and will have max server resources assigned to this task. Which method of adding the index would be faster?
Upvotes: 1
Views: 1431
Reputation: 1093
Reading up on the Microsoft documentation, it looks like I'll be doing it a third way. Since SQL Server 2016 you don't have to drop the existing indexes; in fact they remain very useful alongside the ColumnStore index.
What I will do is create a Clustered ColumnStore index over the top of the existing "normal" RowStore indexes without dropping them, using the DROP_EXISTING option to convert the existing clustered index in place.
With this option enabled, SQL Server reads the data out of the existing clustered RowStore index to build the ColumnStore, and the nonclustered RowStore indexes are kept and continue to behave just like regular indexes. The index names, metadata and statistics are preserved, so execution plans that used the existing indexes will continue to be valid.
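A minimal sketch of what I'm planning, assuming the existing clustered index is named CIX_FactSales on dbo.FactSales (all names here are placeholders; if the clustered index is the PRIMARY KEY constraint itself, it would likely need to be re-created as a nonclustered PK first):

```sql
-- Replace the existing clustered rowstore index with a clustered columnstore
-- index in a single operation; the nonclustered rowstore indexes stay in place.
CREATE CLUSTERED COLUMNSTORE INDEX CIX_FactSales
ON dbo.FactSales
WITH (DROP_EXISTING = ON, MAXDOP = 12);   -- 12 to match the cores available during the maintenance window
```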
This way I don't have to sacrifice the single-value-lookup performance I get from my Primary Key and the individual indexes I've tuned for certain queries; they sit alongside the ColumnStore and continue to be available. Fantastic!
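As a sanity check after the conversion, something like this (table name again a placeholder) should show one CLUSTERED COLUMNSTORE index plus the original NONCLUSTERED RowStore indexes with their names unchanged:

```sql
-- List all indexes on the table and their types.
SELECT name, type_desc, is_primary_key
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'dbo.FactSales');
```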
Upvotes: 3