Richard Mao

Reputation: 302

MongoDB 3.2 WiredTiger heavy writes with periodic 100% disk utilization

My MongoDB 3.2.18 standalone instance has 128 GB of RAM and runs WiredTiger on an Amazon EBS io1 SSD volume formatted with XFS. The workload is 100% writes. I have 5 collections in one database, each holding around 0.8 billion documents.

Every 30-60 seconds I see 10-15 seconds of 100% disk utilization in iostat, and over the same window mongostat shows aw (active writers) climbing to 20-50 for around 10-15 s. I don't think my write volume is particularly high, so I'm wondering what the root cause of this periodic high disk utilization is. It slows my writes down a lot. Below are the statistics.

[screenshot] normal statistics from iostat

[screenshot] 100% disk utilization from iostat

[screenshot] mongostat aw increased to 40 for 10 seconds
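
For reference, the disk numbers come from iostat and the aw column comes from mongostat. The small PyMongo loop below is only a sketch of how to poll the counter that mongostat's aw column is derived from (globalLock.activeClients.writers) once per second; the connection string is a placeholder, not something from the question.

import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string

while True:
    status = client.admin.command("serverStatus")
    # globalLock.activeClients.writers is what mongostat reports in the "aw" column
    aw = status["globalLock"]["activeClients"]["writers"]
    print(time.strftime("%H:%M:%S"), "active writers:", aw)
    time.sleep(1)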

Upvotes: 0

Views: 645

Answers (2)

Richard Mao

Reputation: 302

I found out that the main reason was an unnecessary index on the collection being updated.

I am using the command below to update user documents in the "mongo_user" collection with a list of entries. id_time_dict holds the "post id" / "post time" pairs that are passed to $addToSet via $each (which expects an array).

# Add every entry in id_time_dict to the user's "post" array, skipping duplicates
db_mongo.mongo_user.update({ "_id": str(user_id) }, { "$addToSet": { "post": {"$each": id_time_dict} } })

I have "post" field as the index , which leads to heavy reads from iotop command . Usually 25m/s to 120m/s reads loading resulted in periodic 100% disk utilization . After dropping the index , usual reads loading is 2-5m/s .

Upvotes: 0

dnickless

Reputation: 10918

While this is not a proper answer, it is too long to simply throw into a comment...

There have been a number of JIRA issues in the past where users reported somewhat similar behaviour that seemed to be related to cache eviction (mind you, some were for earlier versions like 3.0) - here are just a few of them:

Your issue might not be 100% identical to the ones above, but I could imagine that upgrading to a more recent version of MongoDB might solve your problem. Also, it might be worth temporarily increasing your AWS subscription (e.g. the provisioned IOPS on the io1 volume) and seeing if the problem disappears.
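
If you want to check whether the stalls line up with cache eviction before upgrading, one option is to sample the WiredTiger cache counters from serverStatus around a spike. This is only a sketch: the connection string is a placeholder and the exact counter names can differ slightly between MongoDB versions.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
cache = client.admin.command("serverStatus")["wiredTiger"]["cache"]

# How full and how dirty the WiredTiger cache is at this moment
print("bytes currently in the cache :", cache["bytes currently in the cache"])
print("maximum bytes configured     :", cache["maximum bytes configured"])
print("tracked dirty bytes in cache :", cache["tracked dirty bytes in the cache"])

# Application threads having to evict pages themselves is a typical sign of cache pressure
print("pages evicted by application threads:",
      cache.get("pages evicted by application threads", "counter not present in this version"))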

Upvotes: 0
