Reputation: 166
I have created a collection in MongoDB that has four indexes (one for _id, one for the shard key, and two others for query optimization on fields f1 and f2), and it is sharded across an 8-node cluster (each node has 14GB of RAM). The application is write-intensive.
Update: I am using WiredTiger as the storage engine.
The problem is that when I remove one of the secondary indexes (on f1 or f2), the insertion speed reaches an acceptable rate, but when I add the index back, the insertion performance drops sharply!
I guess the problem is that the indexes do not fit in RAM, and because the access pattern is nearly random, the HDD speed becomes the bottleneck. But I expected MongoDB to load all indexes into RAM, because each node has 14GB of RAM and the 'top' command says that MongoDB is using only about 6GB on each node. The index sizes are as follows:
Each Node:
As you can see, the total index size is about 9.5GB, MongoDB is using about 6GB, and the available RAM is 14GB, so the indexes should fit entirely in memory.
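For reference, index sizes like those above can be read directly from the mongo shell (a diagnostic sketch; the database and collection names `mydb`/`mycoll` are placeholders):

```javascript
// Inspect per-index sizes (in bytes) from the mongo shell.
// "mydb" and "mycoll" are placeholder names for illustration.
var stats = db.getSiblingDB("mydb").mycoll.stats();
printjson(stats.indexSizes);                              // size of each index
print("total index size: " + stats.totalIndexSize + " bytes");
```

When run through mongos on a sharded collection, these figures are aggregated across shards; run it against an individual shard to get per-node numbers.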
Best Regards
Upvotes: 0
Views: 239
Reputation: 11671
Why does performance drop after adding a new index?
It's expected that an index slows write performance, since each index increases the amount of work needed to complete a write. How much does performance degrade, and what rate would be acceptable? Can you show us an example document and the exact definitions of the indexes you are creating? Some indexes are much more costly to maintain than others.
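To quantify the degradation, a rough shell timing loop can compare insert throughput with and without the secondary index (a sketch only; `mydb`/`bench` and the batch size are placeholder choices):

```javascript
// Rough insert-throughput check in the mongo shell.
// "mydb" and "bench" are placeholder names for illustration.
var coll = db.getSiblingDB("mydb").bench;
var docs = [];
for (var i = 0; i < 10000; i++) {
  docs.push({ f1: Math.random(), f2: Math.random() });
}
var start = new Date();
coll.insertMany(docs, { ordered: false });
var elapsedMs = new Date() - start;
print(docs.length / (elapsedMs / 1000) + " inserts/sec");
// Repeat after coll.createIndex({ f1: 1 }) to measure the difference.
```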
If the problem is random access to the indexes, why does MongoDB not load all indexes into RAM?
It will load what is being used. How do you know it's not loading the indexes into RAM? Are you seeing a lot of page faults despite having extra RAM? What's your WiredTiger cache size set to?
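The configured and currently used cache sizes can be checked per node via serverStatus (a diagnostic sketch; the statistic names come from WiredTiger and may vary slightly between versions):

```javascript
// Inspect WiredTiger cache configuration and usage on a node.
var wt = db.serverStatus().wiredTiger.cache;
print("configured max: " + wt["maximum bytes configured"]);
print("currently used: " + wt["bytes currently in the cache"]);
print("dirty:          " + wt["tracked dirty bytes in the cache"]);
// The cache limit can be raised in mongod.conf via
// storage.wiredTiger.engineConfig.cacheSizeGB.
```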
How can I determine which parts of each index are loaded into RAM and which are not?
I don't believe there is a simple way to do this.
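One approximation, assuming a reasonably recent MongoDB version, is the per-index WiredTiger statistics exposed by collection stats with `indexDetails` (this shows how much of each index is cached, though not which parts):

```javascript
// Approximate per-index cache residency: bytes of each index
// currently held in the WiredTiger cache.
// "mydb" and "mycoll" are placeholder names for illustration.
var s = db.getSiblingDB("mydb").mycoll.stats({ indexDetails: true });
for (var name in s.indexDetails) {
  var cached = s.indexDetails[name].cache["bytes currently in the cache"];
  print(name + ": " + cached + " bytes in cache");
}
```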
Upvotes: 0