Reputation: 1749
I'm trying to repeatedly insert about 850 documents, each between 100 and 300 KB, into a Cosmos collection. They all share the same partition key.
The estimator suggests that 50K RUs should handle this in short order, but even at well over 100K RUs it's averaging 20 minutes or so per set rather than something more reasonable.
Should I have unique partition keys for each document? Is the problem that, with all the documents going to the same partition key, they are handled in series and the capacity isn't load leveling? Will using the bulk executor fix this?
Upvotes: 0
Views: 488
Reputation: 23782
Should I have unique partition keys for each document? Is the problem that, with all the documents going to the same partition key, they are handled in series and the capacity isn't load leveling?
You can find the following statement in this doc:
To fully utilize throughput provisioned for a container or a set of containers, you must choose a partition key that allows you to evenly distribute requests across all distinct partition key values.
So defining a good partition key matters for both inserts and queries. However, choosing the partition key is really worth careful thought; please refer to this doc on how to choose your partition key.
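For illustration, here is a minimal sketch with the Python azure-cosmos SDK; the endpoint, key, database name, and the /deviceId partition key path are assumptions, not taken from the question. The point is a partition key with many distinct values, so writes fan out across physical partitions instead of queuing on one:

```python
# Sketch only: endpoint, key, and names below are illustrative assumptions.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists(id="mydb")

# Pick a partition key with many distinct values so writes spread across
# physical partitions rather than serializing on a single one.
container = database.create_container_if_not_exists(
    id="documents",
    partition_key=PartitionKey(path="/deviceId"),
)

# Each document carries its own partition key value.
container.upsert_item({"id": "doc-1", "deviceId": "device-42", "payload": "..."})
```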
Will using the bulk executor fix this?
Yes, you can use a continuation token with bulk inserts. For more details, please refer to my previous answer: How do I get a continuation token for a bulk INSERT on Azure Cosmos DB?
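The bulk executor library itself ships for .NET and Java; as a rough stand-in in the same Python SDK, here is a sketch that simply parallelizes the upserts client-side (all names are illustrative assumptions, and production code would also retry on 429 throttling):

```python
# Not the bulk executor library; a rough client-side stand-in that
# parallelizes single-document upserts. Names are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("mydb").get_container_client("documents")

docs = [
    {"id": f"doc-{i}", "deviceId": f"device-{i % 16}", "payload": "..."}
    for i in range(850)
]

# Fan the writes out across worker threads; add 429 retry handling for real use.
with ThreadPoolExecutor(max_workers=32) as pool:
    list(pool.map(container.upsert_item, docs))
```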
Hope it helps you.
Just to summarize: also evaluate the default indexing policy on the collection. With the default policy every property is indexed, and that may take 100 to 1000x more RUs than actually writing the document.
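As a sketch of that point (same assumed Python SDK and paths as above), a leaner indexing policy that excludes everything except the paths you actually query can cut the per-write RU charge considerably:

```python
# Sketch: index only the paths you query, instead of the default
# "index everything" policy. Path names are illustrative assumptions.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.get_database_client("mydb")

lean_indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/deviceId/?"}],  # keep the paths you filter on
    "excludedPaths": [{"path": "/*"}],           # skip everything else
}

container = database.create_container_if_not_exists(
    id="documents-lean-index",
    partition_key=PartitionKey(path="/deviceId"),
    indexing_policy=lean_indexing_policy,
)
```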
Upvotes: 1