Ph0en1x

Reputation: 10087

MongoDB simple query speed when the index is much bigger than available RAM

I have a very simple data structure. Let's assume that documents in the collection will look like this:

{
 _id: "...",
 indexedField: "value 1",
 ...
}

The indexedField field will be indexed.

The problem is that the number of such documents will be really huge, say 1 billion. But the machine that will host this DB does not have much memory, maybe 4 GB, not more.

Most of the queries I need to run look like this:

db.collection.find({indexedField: "queryValue"}).skip(offset).limit(100)

So the question is: will this perform well, or will it show poor performance because of memory swapping?
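For a rough sense of whether such an index can fit in 4 GB of RAM, here is a back-of-envelope sketch. The per-entry byte costs are assumptions for illustration only, not measured MongoDB figures; WiredTiger's prefix compression can shrink the real index considerably:

```javascript
// Back-of-envelope index size estimate for 1 billion entries.
// Per-entry costs are assumed, not measured: roughly key bytes
// plus record-id and B-tree overhead.
const docs = 1e9;
const bytesPerEntryInt = 16;   // assumed: 8-byte integer key + overhead
const bytesPerEntryStr = 48;   // assumed: short string key + overhead
const gib = (n) => n / 1024 ** 3;

console.log(gib(docs * bytesPerEntryInt).toFixed(1)); // "14.9" GiB
console.log(gib(docs * bytesPerEntryStr).toFixed(1)); // "44.7" GiB
```

Under these assumptions, even an integer index for a billion documents would be several times larger than 4 GB, so the working set would not fit in RAM and page faults on index lookups become likely.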

Upvotes: 0

Views: 111

Answers (1)

DhruvPathak

Reputation: 43265

That would depend on two factors:

  1. The type of data being indexed. If the indexed field holds integers, the index will not be very large, and you should be fine. One further optimization is to overwrite MongoDB's "_id" with integer-based keys, if they are unique (keeping autosharding and future scaling in mind, though).

  2. db.collection.find({indexedField: "queryValue"}).skip(offset).limit(100)

This query is expensive, and it keeps getting slower as you increase the offset, since MongoDB will fetch the full records and then scan through them to skip `offset` documents before returning the LIMIT N documents. So, if a large number of documents match "queryValue" and the offset is high, the query will be slow.
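A common workaround is range-based ("keyset") pagination: remember the last `_id` of the previous page and filter on it instead of calling skip(). The sketch below simulates both strategies over an in-memory array to show why skip() cost grows with the offset; the function names and the MongoDB query in the comments are illustrative assumptions, not from the answer above.

```javascript
// Stand-in for the entries matching {indexedField: "queryValue"},
// already in index order (names are illustrative only).
const matching = Array.from({ length: 1000 }, (_, i) => ({ _id: i }));

// skip/limit: the engine still walks past `offset` entries before
// it can return the page, so cost grows linearly with the offset.
function skipPage(docs, offset, limit) {
  let walked = 0;
  const page = [];
  for (const d of docs) {
    walked++;
    if (walked > offset && page.length < limit) page.push(d);
    if (page.length === limit) break;
  }
  return { page, walked };
}

// Range-based paging: on a real index this is a direct seek, roughly
//   db.collection.find({indexedField: v, _id: {$gt: lastSeen}}).limit(100)
// (the filter() here only simulates the result, not the seek cost).
function rangePage(docs, lastSeen, limit) {
  return docs.filter((d) => d._id > lastSeen).slice(0, limit);
}

const a = skipPage(matching, 900, 100);
const b = rangePage(matching, 899, 100);
console.log(a.walked);            // 1000 entries touched for the 10th page
console.log(b[0]._id, b.length);  // 900 100
```

The trade-off is that keyset pagination only supports "next page" style navigation, not jumping to an arbitrary page number.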

Upvotes: 2
