Reputation: 507
I have a MongoDB v3.4 database on mLab's M2 instance with 3.5 GB RAM on a dedicated server.
I have 500,000 documents storing tweets with the body of the tweet in a single string field. I have a $text index on that field.
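For reference, the index was created roughly like this (the collection name tweets and field name body are placeholders for this post, not necessarily the real names):

db.tweets.createIndex({ body: "text" })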
When I query that field, it can take anywhere from a few seconds to over two minutes. My query is:
[
  { "$match": { "$text": { "$search": "game losing" } } },
  { "$sort": { "score": { "$meta": "textScore" } } },
  { "$limit": 10 }
]
I have reviewed the following posts:
I have incorporated the suggestions from those posts, but I still see very poor performance.
I do update the tweets regularly with new stats (e.g. when the like count of a tweet goes up).
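In case it matters, this is roughly how I have been inspecting the query plan (same placeholder collection name tweets as above; the explain option returns the planner output instead of the documents):

db.tweets.aggregate([
  { $match: { $text: { $search: "game losing" } } },
  { $sort: { score: { $meta: "textScore" } } },
  { $limit: 10 }
], { explain: true })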
Upvotes: 3
Views: 2527
Reputation: 258
MongoDB is designed to handle millions of documents; 500K is a very small dataset.
I think you are loading all of the matching data at once on the client side (when a user searches for a term) and paginating it there. That can cause server timeouts, because the result set can be large.
What you can do here is use the Aggregation Framework and limit the amount of data sent back from the database server, roughly as in the sketch below.
See the docs here: https://docs.mongodb.com/manual/aggregation/ and the examples in them. :)
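As a rough sketch of that suggestion (collection and field names such as tweets, body, and likeCount are placeholders), the pipeline can sort by text score, cap the result set on the server, and project only the fields the client actually needs, so only a small payload ever leaves the database:

db.tweets.aggregate([
  // $text must appear in the first $match stage of the pipeline
  { $match: { $text: { $search: "game losing" } } },
  // sort by relevance using the text score computed by the $text match
  { $sort: { score: { $meta: "textScore" } } },
  // keep only the top 10 matches server-side
  { $limit: 10 },
  // send back just the fields the client needs
  { $project: { _id: 0, body: 1, likeCount: 1, score: { $meta: "textScore" } } }
])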
Upvotes: 1