Reputation: 887
I have a delimited file that I split in Mule and process asynchronously. I then aggregate each response using the collection-aggregator, which works fine.
But in the production environment this file will be huge, and aggregating all the results into an in-memory collection will be a problem.
Is there another way to aggregate massive amounts of data in Mule without holding all of it in memory?
Upvotes: 0
Views: 146
Reputation: 5115
There is nothing like that out of the box. Probably the ideal approach would be to leverage the new Scatter-Gather with a custom-aggregation-strategy that stores results to disk rather than in memory.
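A minimal sketch of what such a strategy might look like, assuming the Mule 3.x routing API (the `AggregationStrategy` and `AggregationContext` names here are based on that API; the class name `DiskBackedAggregationStrategy` and the temp-file handling are illustrative assumptions, not a tested implementation):

```java
import java.io.BufferedWriter;
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

import org.mule.api.MuleEvent;
import org.mule.api.MuleException;
import org.mule.routing.AggregationContext;
import org.mule.routing.AggregationStrategy;

// Hypothetical disk-backed aggregation strategy for Scatter-Gather.
// Instead of collecting every route result in a List, it streams each
// payload to a temporary file and returns the file as the new payload.
public class DiskBackedAggregationStrategy implements AggregationStrategy {

    @Override
    public MuleEvent aggregate(AggregationContext context) throws MuleException {
        try {
            File out = File.createTempFile("aggregated-", ".txt");
            try (BufferedWriter writer =
                     Files.newBufferedWriter(out.toPath(), StandardCharsets.UTF_8)) {
                // Write each route's result to disk as it is consumed,
                // so memory usage stays constant regardless of file size.
                for (MuleEvent event : context.getEvents()) {
                    writer.write(event.getMessageAsString());
                    writer.newLine();
                }
            }
            // Hand downstream processors a File payload they can stream from.
            MuleEvent result = context.getOriginalEvent();
            result.getMessage().setPayload(out);
            return result;
        } catch (IOException e) {
            throw new RuntimeException("Failed to write aggregated results to disk", e);
        }
    }
}
```

You would then reference this class from the scatter-gather's custom-aggregation-strategy element in the flow configuration; check the exact attribute (class vs. bean reference) against the docs for your Mule version.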
Upvotes: 1