Reputation: 8775
I need to store a large number (~50,000) of small pieces of data in RavenDB. When I read this data back I'll be reading the whole lot every time. When writing, I could either write the whole lot or each individual piece.
The data looks like this:
public class Item
{
    public int Id { get; set; }
    public long Value { get; set; }
}
I could just as easily store this as a document wrapper around a single Dictionary<int, long> rather than a collection of Item objects.
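For concreteness, the two document shapes I'm weighing look roughly like this (the wrapper class names are placeholders of my own):

using System.Collections.Generic;

// Option A: one document wrapping a collection of Item objects.
public class ItemsDocument
{
    public string Id { get; set; }
    public List<Item> Items { get; set; }
}

// Option B: one document wrapping a single dictionary keyed by item id.
public class ItemsDictionaryDocument
{
    public string Id { get; set; }
    public Dictionary<int, long> Values { get; set; }
}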
Which of these approaches is more efficient in RavenDB?
If my data set increased to ~500,000, would the difference in efficiency be exaggerated (in the read case)?
Upvotes: 1
Views: 229
Reputation: 4492
What are those pieces of data? How are they connected? How often are you going to read this set? How many different sets will you have? How often will they change? Do you always need them all in one go?
The real question is what the transactional boundary is in this model. If changing one part of the set generally means the whole thing changes to some extent, it makes sense to put it all in one document.
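If it does fit in one document, a minimal sketch of the read and write paths with the session API might look like this (store is an already-initialized IDocumentStore, and ItemsDocument with the "items/all" id are the placeholder names from the question, not anything RavenDB mandates):

// Writing: the whole set is one document, so one Store + SaveChanges
// replaces it in a single request.
using (var session = store.OpenSession())
{
    session.Store(new ItemsDocument
    {
        Id = "items/all",
        Items = allItems   // the full ~50,000-entry list
    });
    session.SaveChanges();
}

// Reading: a single Load brings the entire set back in one round trip.
using (var session = store.OpenSession())
{
    var doc = session.Load<ItemsDocument>("items/all");
}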
A document that grows large is not a problem as long as you can cache it - look up RavenDB's Aggressive Caching.
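As a rough sketch of what that looks like on the client (the AggressivelyCacheFor call is from the RavenDB client API as I recall it, and the 5-minute duration here is an arbitrary choice):

// Within this scope the client serves matching requests from its local
// cache instead of round-tripping to the server on every Load.
using (store.AggressivelyCacheFor(TimeSpan.FromMinutes(5)))
using (var session = store.OpenSession())
{
    var doc = session.Load<ItemsDocument>("items/all");
}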
Upvotes: 2