Primico

Reputation: 2465

"Request rate is large" while running mongorestore to CosmosDB

I'm attempting to dump my local MongoDB database and restore it to Azure Cosmos DB, and I get the error "Request rate is large". My database is 9.3 MB with 116 collections. I'm guessing that restoring collection by collection would work. Is this the only way, or do I need to move to the next pricing tier?

Upvotes: 2

Views: 1929

Answers (1)

Nick Chapsas

Reputation: 7198

In Cosmos DB there are no pricing tiers; instead, throughput is provisioned at the collection level or, less commonly, at the database level.

You are getting a 429 "Request rate is large" error because you are hitting Cosmos DB with more RU/s than you have provisioned. This has nothing to do with the volume of your database, only with the request rate at which you are hitting Cosmos DB.

You can prevent this from happening by increasing the provisioned throughput in the Scale settings in the Azure portal, or by increasing the number of retries for throttled requests at the SDK level.
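You can also throttle the restore itself so fewer requests hit the provisioned throughput at once. A minimal sketch using mongorestore's `--numParallelCollections` flag (by default it restores 4 collections in parallel); the account name, key, and database name below are placeholders:

```shell
# Restore one collection at a time instead of the default 4 in parallel,
# which lowers the request rate against the provisioned RU/s.
# "myaccount", "<key>" and "mydb" are placeholders for your own values.
mongorestore \
  --host myaccount.documents.azure.com:10255 \
  --ssl \
  -u myaccount \
  -p "<key>" \
  --db mydb \
  --numParallelCollections=1 \
  dump/mydb
```

This trades import speed for staying under the provisioned request rate, which is usually acceptable for a one-off migration.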

You can increase the throughput temporarily in order to import the data, and then scale it back down.
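The temporary scale-up/scale-down cycle can be scripted with the Azure CLI; this is a sketch assuming the `az cosmosdb mongodb collection throughput update` command is available in your CLI version, with placeholder account, resource group, database, and collection names:

```shell
# Temporarily raise the collection's provisioned throughput before the import.
# All resource names here are placeholders.
az cosmosdb mongodb collection throughput update \
  --account-name myaccount \
  --resource-group myrg \
  --database-name mydb \
  --name mycollection \
  --throughput 2000

# ... run mongorestore here ...

# Scale back down afterwards so you are not billed for unused RU/s.
az cosmosdb mongodb collection throughput update \
  --account-name myaccount \
  --resource-group myrg \
  --database-name mydb \
  --name mycollection \
  --throughput 400
```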

However, having 116 collections for 9.3 MB of data is not a good idea in Cosmos DB, as you will be charged for a minimum of 400 RU/s per collection. I would recommend reading more about Cosmos DB pricing and Cosmos DB errors.

Error codes: https://learn.microsoft.com/en-us/rest/api/cosmos-db/http-status-codes-for-cosmosdb

Pricing: https://azure.microsoft.com/en-gb/pricing/details/cosmos-db/

Upvotes: 3
