StackOfStacks

Reputation: 83

Document update resulting in collection greater than 10GB

How does DocumentDB handle the case when a document update results in exceeding the collection size (10 GB)? Say I have 50K documents in one of my collections, and then I update all of the documents to include an additional JSON section that could push the collection past that limit.

What are the best practices for handling this case, and is there built-in support for this scenario (e.g. moving that document to another collection)?

Upvotes: 0

Views: 118

Answers (1)

David Makogon

Reputation: 71055

There's no single best practice, but there are a few things built into DocumentDB to help you make the right decisions:

  • x-ms-resource-usage is a header returned on your queries. Among other things, its collectionSize value reports total consumption within your collection, including overhead from indexes, etc. You can compare that to collectionSize in the returned x-ms-resource-quota header (which should equate to 10GB) to see how much room you have left (see the first sketch after this list). There's a bit more detail in this answer.
  • The various language-level drivers provide partitioning support. When you realize you need to span multiple partitions, you can implement a partition resolver so that content is written across them (see the second sketch after this list). There are several answers covering partitioning thoughts, such as this one posted by Larry Maccherone. And the DocumentDB team published an article on partitioning here.
  • You're probably aware already, but: you can check for HTTP 403, which is returned when an insert would exceed the collection size (a sketch of handling that appears at the end of this answer). All error codes are documented here.
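
To make the first bullet concrete, here's a minimal sketch in Python that parses the semicolon-delimited key=value pairs in those two headers and compares the collectionSize entries. The helper names and sample header strings are made up for illustration; the units are whatever the service reports.

    def parse_resource_header(value):
        """Split a semicolon-delimited 'key=value' header string, such as
        x-ms-resource-usage or x-ms-resource-quota, into a dict of floats."""
        pairs = (item.split("=", 1) for item in value.split(";") if item)
        return {key: float(val) for key, val in pairs}

    def collection_size_headroom(usage_header, quota_header):
        """Compare the collectionSize entries in the usage and quota headers;
        returns (used, quota, fraction_consumed)."""
        used = parse_resource_header(usage_header)["collectionSize"]
        quota = parse_resource_header(quota_header)["collectionSize"]
        return used, quota, used / quota

    # Made-up header values for illustration; units are whatever the service reports.
    usage = "documentsCount=52300;documentsSize=812345;collectionSize=905812"
    quota = "documentsCount=-1;documentsSize=10485760;collectionSize=10485760"
    print(collection_size_headroom(usage, quota))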
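
And for the second bullet, the sketch below shows the general shape of a hash-based partition resolver. The class name, method names, and collection links are illustrative of the pattern, not the SDK's actual types.

    import hashlib

    class SimpleHashPartitionResolver:
        """Illustrative hash-based resolver: routes a document to one of
        several collection self-links based on its partition key."""

        def __init__(self, partition_key_name, collection_links):
            self.partition_key_name = partition_key_name
            self.collection_links = collection_links

        def _bucket(self, value):
            # Hash the partition key and map it onto one of the collections.
            digest = hashlib.md5(str(value).encode("utf-8")).hexdigest()
            return int(digest, 16) % len(self.collection_links)

        def resolve_for_create(self, document):
            """Pick the collection a new or updated document is written to."""
            return self.collection_links[self._bucket(document[self.partition_key_name])]

        def resolve_for_read(self, partition_key):
            """For a point read, return the collection(s) the key maps to."""
            return [self.collection_links[self._bucket(partition_key)]]

    # Usage with placeholder collection links:
    resolver = SimpleHashPartitionResolver(
        "userId", ["dbs/mydb/colls/coll1", "dbs/mydb/colls/coll2"])
    print(resolver.resolve_for_create({"id": "1", "userId": "alice"}))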

Regarding your question about moving documents to different collections: that's ultimately your call, whether you do it within your own code or by taking advantage of partition resolvers.
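
If you do handle it in your own code, one rough sketch (the client call and exception attributes here stand in for whatever your SDK actually exposes) is to catch the 403 on insert and retry against a designated overflow collection:

    def create_with_overflow(client, primary_link, overflow_link, document):
        """Try the primary collection first; if the insert is rejected with
        HTTP 403 (collection full), retry against an overflow collection.
        The client and exception shape are illustrative, not a specific SDK."""
        try:
            return client.CreateDocument(primary_link, document)
        except Exception as error:  # e.g. the SDK's HTTP failure exception
            if getattr(error, "status_code", None) == 403:
                return client.CreateDocument(overflow_link, document)
            raise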

Upvotes: 1
