Reputation: 1
I am seeing much the same behavior as this issue, which I can't comment on because I don't have enough reputation. I have an API backed by Cosmos DB, and a daily batch job that needs to write into the same Cosmos instance. When the job kicks off, it greatly affects API performance. Scaling up to an extremely high RU/s solves the issue, but it solves it whether or not the "Write throughput budget" setting is set. So let's assume the API was using 2500 RU/s. If I then use Data Flow to scale to 5000 RU/s with a 2500 RU budget, it should not affect API performance, since there are still 2500 RU/s available that the Data Flow is not touching. That is not the case, though: we still see a direct performance hit to the API as soon as the Data Flow job starts copying into Cosmos DB. I guess my question is: what exactly is the "Write throughput budget", and how do you use it effectively?
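For context, here is roughly how the hit shows up from the API side. This is a minimal sketch with placeholder endpoint, key, and container names, not my actual code:

```python
from azure.cosmos import CosmosClient
from azure.cosmos.exceptions import CosmosHttpResponseError

# Placeholder endpoint, key, and names for illustration only.
client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("apidb").get_container_client("orders")

def log_charge(headers, result):
    # Every Cosmos DB response reports its RU cost in this header.
    print("RU charge:", headers.get("x-ms-request-charge"))

try:
    container.read_item(item="order-1", partition_key="customer-1",
                        response_hook=log_charge)
except CosmosHttpResponseError as e:
    if e.status_code == 429:
        # 429 = rate limited: combined RU/s demand exceeded provisioned throughput.
        print("Request throttled; API latency spikes while the Data Flow job runs")
```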
Upvotes: 0
Views: 610
Reputation: 11395
The "write throughput budget" refers to the provisioned or allocated capacity for write operations within the database service.
Azure Cosmos DB uses a provisioned throughput model: you allocate Request Units per second (RU/s) to a container or database, and both read and write operations draw from that single allocation. When combined demand exceeds the provisioned RU/s, requests are rate limited (HTTP 429), which is what surfaces as the performance hit on your API.
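If you stay on manual throughput, you can inspect and change the container's RU/s programmatically around the batch window. A minimal sketch with the Python SDK, assuming throughput is provisioned at the container level and using placeholder names:

```python
from azure.cosmos import CosmosClient

# Placeholder endpoint, key, and names for illustration only.
client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("apidb").get_container_client("orders")

# Read the current provisioned throughput on the container.
current = container.get_throughput()
print("Provisioned RU/s:", current.offer_throughput)

# Scale up before the batch job starts, then back down afterwards.
container.replace_throughput(5000)
```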
Consider using Azure Cosmos DB's autoscale feature, which dynamically adjusts provisioned throughput based on actual usage patterns, or the serverless tier for sporadic or unpredictable write workloads, where you pay only for the RUs each operation consumes.
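With autoscale, you set the maximum RU/s when the container is created (or migrated), and Cosmos DB scales between 10% of that maximum and the maximum as demand changes. A minimal sketch, assuming the container can be created fresh with autoscale and using placeholder names:

```python
from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

# Placeholder endpoint, key, and names for illustration only.
client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
database = client.get_database_client("apidb")

# Autoscale floats between 10% of the max and the max RU/s,
# so this container scales between 500 and 5000 RU/s with demand.
database.create_container(
    id="orders_autoscale",
    partition_key=PartitionKey(path="/customerId"),
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=5000),
)
```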
Upvotes: 0