Tom Robinson

Reputation: 8508

Can a partitioned CosmosDB / DocumentDB collection have fewer than 400 RU/s of throughput configured?

Update: This question is now invalid as the events I'd thought happened didn't happen quite as I'd thought (see below for details). I'm leaving the question as-is though as the answers and comments may be useful to others.

I've created a collection via the Azure Portal, configured initially with:

Then through the .NET SDK I've changed the Initial Throughput Capacity (RU/s) to 400.
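For reference, changing a collection's throughput through the .NET SDK is done by replacing its offer. A minimal sketch (assuming an existing `DocumentClient` named `client` and a `DocumentCollection` named `collection`; those names are illustrative, not from the question):

```csharp
using System.Linq;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;

// Find the offer (throughput settings) associated with the collection.
Offer offer = client.CreateOfferQuery()
    .Where(o => o.ResourceLink == collection.SelfLink)
    .AsEnumerable()
    .Single();

// OfferV2 wraps the existing offer with a new throughput value (RU/s).
offer = new OfferV2(offer, 400);
await client.ReplaceOfferAsync(offer);
```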

According to the Scale & Settings tab for the collection in the Azure Portal the value of Throughput (400 - 10,000 RU/s)* is 400.

Is this a supported configuration? I'm assuming this is a bug somewhere but perhaps it isn't? What would I be charged for this collection?

As an aside...

The Add Collection screen doesn't allow me to set the Throughput to 400 on initial creation but it seems I can change it afterwards.


Update: I think I've worked out what happened. I manually created a partitioned collection, then forgot that my code (an importer/migration tool I'm working on) deletes the database and recreates the database and collection on startup. When it does this, the collection is created as a non-partitioned collection. Now that I've corrected this, I get the error "The offer should have valid throughput values between 2500 and 100000 inclusive in increments of 100." if I try to reproduce what I thought I'd managed to do before.
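To make the distinction concrete: whether a collection is partitioned is determined at creation time by specifying a partition key, and a partitioned collection must be created with at least 2,500 RU/s. A sketch of the corrected recreation step (the partition key path and resource names here are illustrative, not from the question):

```csharp
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Define a collection with a partition key, making it a partitioned collection.
var collection = new DocumentCollection { Id = "myCollection" };
collection.PartitionKey.Paths.Add("/partitionKey");

// Partitioned collections require an initial throughput of at least 2,500 RU/s.
await client.CreateDocumentCollectionAsync(
    UriFactory.CreateDatabaseUri("myDatabase"),
    collection,
    new RequestOptions { OfferThroughput = 2500 });
```

Omitting the `PartitionKey` definition is what silently produces a single-partition collection, which is why the 400 RU/s setting was accepted.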

Upvotes: 0

Views: 1601

Answers (1)

David Makogon

Reputation: 71035

You're not seeing a bug. You're attempting to set an RU value that falls outside the valid range for a partitioned collection.

Single-partition collections (10 GB) allow 400-10,000 RU.

What you're showing in your question is a partitioned collection, with scale starting at 2500 RU.

And you cannot configure a partitioned collection for 400 RU, whether through the portal or through API/SDK.

Upvotes: 1

Related Questions