Brian Pace

Reputation: 155

How to eliminate the 40MB query limit of MongoDB aggregate queries with Cosmos API

I have two collections, balancesheet and income, in a database called sheets, which need to be joined on a field called "_id".

I am trying to run an aggregation across these two moderately large collections, and I set the limit to 1 so that only one result is returned.

However, I am still hitting the 40MB limit, even though I am certain that a single result will not reach 40MB.

import pymongo
from pprint import pprint

uri = "connection string"
client = pymongo.MongoClient(uri)
db = client.sheets

pipeline = [
    # join each income document with its matching balancesheet document on _id
    {'$lookup': {'from': 'balancesheet',
                 'localField': '_id',
                 'foreignField': '_id',
                 'as': 'company'}},
    # keep only the first joined result
    {'$limit': 1},
]

for doc in db.income.aggregate(pipeline):
    pprint(doc)

Running the code above gives me this error:

"OperationFailure: Query exceeded the maximum allowed memory usage of 40 MB. Please consider adding more filters to reduce the query response size."

Is there a way to work around this memory limit while still only returning one result?

Upvotes: 1

Views: 1640

Answers (1)

angoyal-msft

Reputation: 81

Thanks for your feedback. Other users are facing a similar issue as well. This issue has been escalated to the Product Group, and they are actively working on improving the aggregation framework; this limit will be removed post-GA.

In the meantime, you can use the following workarounds: 1) reduce the fields used from each document, and 2) reduce the overall number of documents covered by the query.
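For example, here is a minimal sketch of both workarounds applied to the pipeline from the question; the $match filter value and the field names in the $project stage are placeholders that would need to match your schema:

import pymongo
from pprint import pprint

client = pymongo.MongoClient("connection string")
db = client.sheets

pipeline = [
    # workaround 2: filter down the number of income documents before the join
    {'$match': {'_id': 'some-company-id'}},        # placeholder filter value
    # workaround 1: carry only the fields you actually need into the join
    {'$project': {'_id': 1, 'revenue': 1}},        # placeholder field names
    {'$lookup': {'from': 'balancesheet',
                 'localField': '_id',
                 'foreignField': '_id',
                 'as': 'company'}},
    {'$limit': 1},
]

for doc in db.income.aggregate(pipeline):
    pprint(doc)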

Reference GitHub thread: https://github.com/MicrosoftDocs/azure-docs/issues/16997/

Please let us know if you still have any concerns.

Upvotes: 2
