Reputation: 634
I have a copy pipeline set up in Azure Data Factory, which copies everything in Cosmos DB to Azure Data Lake daily. During the copy, there is a spike in RU/s. I do not want to increase the provisioned throughput.
Is there anything I can do to lower the impact? For example, can I set a limit on the copy pipeline?
Upvotes: 3
Views: 832
Reputation: 23782
As @David said in the comment, any interaction with Cosmos DB requires the consumption of RUs. The RU setting is an important driver of both cost and performance. For more details, you could refer to this official article.
Basically, the RU metrics will spike during the ADF copy activity, and the throughput setting will not be automatically adjusted by Cosmos DB.
If you do want to adjust the throughput setting temporarily, you could execute an HTTP-triggered Azure Function via an Azure Function activity placed at the head and tail of the copy activity. In that function, adjust the throughput setting appropriately with the SDK or REST API. (Please refer to this case: Cosmos Db Throughput)
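As a rough illustration of what that Azure Function body could do, here is a minimal sketch using the azure-cosmos Python SDK (v4). The endpoint, key, and database/container names are placeholders, and `set_throughput`/`clamp_ru` are hypothetical helper names, not part of any SDK:

```python
MIN_RU = 400  # Cosmos DB minimum for manual provisioned throughput


def clamp_ru(requested: int) -> int:
    """Round the requested RU/s up to a valid value (>= 400, multiple of 100)."""
    ru = max(requested, MIN_RU)
    return ((ru + 99) // 100) * 100


def set_throughput(endpoint: str, key: str, db_name: str,
                   container_name: str, requested_ru: int) -> int:
    """Scale a container's provisioned throughput; call with a higher value
    before the copy activity and the original value after it."""
    # Imported here so the module loads without the SDK installed.
    from azure.cosmos import CosmosClient

    client = CosmosClient(endpoint, credential=key)
    container = (client.get_database_client(db_name)
                       .get_container_client(container_name))
    ru = clamp_ru(requested_ru)
    container.replace_throughput(ru)
    return ru
```

You would wrap this in the Azure Function's HTTP handler, passing the target RU/s as a query parameter, and call it from two Azure Function activities that bracket the copy activity in the pipeline.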
Upvotes: 1