Reputation: 7042
How are you guys inserting millions of rows of data into SQL Azure? Are you dropping the indexes first?
I'm inserting about 15 million rows in a full batch, and the tables have a number of indexes. I dropped the indexes and inserted the data, but when I try to recreate the indexes with a SQL query, SQL Server Management Studio times out (host disconnected) because building an index over that many rows takes a long time, so the indexes never get created. I still have the clustered index.
How should I handle this situation?
Upvotes: 2
Views: 434
Reputation: 89
My problem with a large amount of data was solved by using a user-defined table type as a parameter of the stored procedure, as described in this link: http://blogs.staykov.net/2011/04/table-valued-parameter-procedures-with.html
I didn't believe it when I saw the result.
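A minimal sketch of the idea (the table type, procedure, and column names here are placeholders, not taken from the linked post):

    // Pass a whole batch of rows to a stored procedure as a single
    // table-valued parameter, so it travels in one round trip.
    // Assumes this (hypothetical) setup already exists in the database:
    //   CREATE TYPE dbo.IdValueList AS TABLE (Id BIGINT NOT NULL, Value NVARCHAR(400) NULL);
    //   CREATE PROCEDURE dbo.InsertValues @Rows dbo.IdValueList READONLY
    //   AS INSERT INTO dbo.Target (Id, Value) SELECT Id, Value FROM @Rows;
    using System.Data;
    using System.Data.SqlClient;

    class TvpInsert
    {
        static void Main()
        {
            var connectionString = "<your SQL Azure connection string>";

            // Build an in-memory table whose columns match the table type.
            var rows = new DataTable();
            rows.Columns.Add("Id", typeof(long));
            rows.Columns.Add("Value", typeof(string));
            for (long i = 0; i < 100000; i++)
                rows.Rows.Add(i, "value " + i);

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.InsertValues", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;

                // Send the DataTable as one structured parameter typed
                // against the user-defined table type.
                var p = cmd.Parameters.Add("@Rows", SqlDbType.Structured);
                p.TypeName = "dbo.IdValueList";
                p.Value = rows;

                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }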
Upvotes: 0
Reputation: 3461
In SQL Azure you will need to keep the clustered indexes so that you can insert data. SQL Azure doesn't allow tables without clustered indexes, as it uses them to keep your database and its backup copies in sync.
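For illustration, creating the target table with a clustered primary key up front satisfies that requirement (table and column names below are placeholders):

    // Sketch: SQL Azure rejects inserts into tables without a clustered
    // index, so give the target table one (here via a clustered primary
    // key) before loading any data.
    using System.Data.SqlClient;

    class CreateTarget
    {
        static void Main()
        {
            var connectionString = "<your SQL Azure connection string>";
            const string ddl = @"
                CREATE TABLE dbo.Target (
                    Id    BIGINT        NOT NULL,
                    Value NVARCHAR(400) NULL,
                    CONSTRAINT PK_Target PRIMARY KEY CLUSTERED (Id)
                );";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(ddl, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }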
We have been moving a lot of data around recently and found SqlBulkCopy to be the quickest. Reliability can be improved by combining it with the Transient Fault Handling framework.
There are lots of examples out there showing how to achieve this.
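Here is one rough sketch of that combination (placeholder names again, and a plain retry loop standing in for the framework's retry policies):

    // Sketch: bulk-load rows with SqlBulkCopy, retrying the load on failure.
    // A hand-rolled retry loop is used here for illustration in place of the
    // Transient Fault Handling framework; names are placeholders.
    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Threading;

    class BulkLoad
    {
        static void Main()
        {
            var connectionString = "<your SQL Azure connection string>";

            // Source rows; in practice this could also be an IDataReader
            // streaming from the original data source.
            var rows = new DataTable();
            rows.Columns.Add("Id", typeof(long));
            rows.Columns.Add("Value", typeof(string));
            for (long i = 0; i < 1000000; i++)
                rows.Rows.Add(i, "value " + i);

            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    using (var conn = new SqlConnection(connectionString))
                    {
                        conn.Open();
                        using (var bulk = new SqlBulkCopy(conn))
                        {
                            bulk.DestinationTableName = "dbo.Target";
                            bulk.BatchSize = 10000;   // commit in chunks
                            bulk.BulkCopyTimeout = 0; // no client-side timeout
                            bulk.WriteToServer(rows);
                        }
                    }
                    break; // success
                }
                catch (SqlException) when (attempt < 5)
                {
                    // Transient Azure errors (throttling, dropped connections)
                    // often succeed on retry; back off briefly and try again.
                    Thread.Sleep(TimeSpan.FromSeconds(5 * attempt));
                }
            }
        }
    }

One caveat with this sketch: because BatchSize commits in chunks, a failure part-way through leaves earlier chunks in the table, so retrying the whole load is only safe into an empty staging table or with some progress tracking.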
Hope this helps.
Upvotes: 1