Reputation: 545
Scenario: I have a database containing 100k records that take about 10 GB in memory. My objective is to segregate these records into groups and write each group to a file.
To achieve this, I am thinking of the design as follows:
{
"group_1" : [...],
"group_2" : [...]
}
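A structure like this can be produced with a DataWeave `groupBy`. A minimal sketch, assuming each record has a hypothetical `category` field that determines its group:

```dataweave
%dw 2.0
output application/json
---
// groupBy builds an object keyed by the lambda's result;
// 'category' is a hypothetical field name used for illustration
payload groupBy ((record) -> "group_" ++ (record.category as String))
```

Note that `groupBy` materializes the whole result object in memory, so on its own it does not address the concerns below.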
Questions/Concerns :
Case 1 : The database select loads all 100k records into memory at once. Question: How can I optimize this step so that I still get all 100k records to process, but without a spike in memory usage?
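For Case 1, the Mule 4 Database connector can stream the result set instead of loading it all at once. A sketch, assuming a configured `Database_Config` and an illustrative query: `fetchSize` bounds how many rows are fetched per round trip, and the repeatable file-store stream keeps only a limited number of objects in heap, buffering the rest to disk:

```xml
<db:select config-ref="Database_Config" fetchSize="1000">
    <!-- Keep at most 500 records in memory; overflow is buffered to disk -->
    <repeatable-file-store-iterable inMemoryObjects="500"/>
    <db:sql>SELECT * FROM records</db:sql>
</db:select>
```

Downstream components that iterate the payload (a `foreach` or a batch job) then consume records incrementally rather than holding all 100k at once.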
Case 2 : When segregating the data, I store the isolated data in the aggregator object in the reduce operator, and that object stays in memory until I write it to files. Question: Is there a way to segregate the data and write it directly to files in the batch aggregator step, then quickly free the memory held by the aggregator object?
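For Case 2, a batch aggregator can be configured with `streaming="true"`, which exposes the aggregated records as a forward-only stream so they can be written out as they arrive instead of being accumulated. A sketch, with a hypothetical job name, file path, and File connector config:

```xml
<batch:job jobName="segregateRecordsJob" blockSize="100">
    <batch:process-records>
        <batch:step name="writeStep">
            <batch:aggregator streaming="true">
                <foreach>
                    <!-- Append each record as it streams through, so the
                         aggregator never holds the full data set at once -->
                    <file:write config-ref="File_Config"
                                path="output/grouped-records.json"
                                mode="APPEND"/>
                </foreach>
            </batch:aggregator>
        </batch:step>
    </batch:process-records>
</batch:job>
```

With streaming enabled the aggregator cannot use a fixed `size` and records must be consumed sequentially, but the aggregated payload is released as it is written instead of being held until the job ends.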
Please treat this as a design question for Mule 4 flows. Thanks to the community for your help and support.
Upvotes: 0
Views: 961
Reputation: 25699
Note that even after reducing all of these configuration values, it doesn't mean there will be no spike when processing. Any processing consumes resources. The idea is to have a predictable, controlled spike instead of an uncontrolled one that can exhaust the available resources.
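The knobs that make the spike predictable are the per-stage buffer sizes. A sketch of the main ones, with illustrative values:

```xml
<!-- Rows fetched per database round trip -->
<db:select config-ref="Database_Config" fetchSize="500">
    <!-- Records kept in heap before spilling to disk -->
    <repeatable-file-store-iterable inMemoryObjects="250"/>
    <db:sql>SELECT * FROM records</db:sql>
</db:select>

<!-- Records loaded into memory per batch block -->
<batch:job jobName="segregateRecordsJob" blockSize="50">
    ...
</batch:job>
```

Each value caps how many records are in flight at once, so peak memory is roughly (records in flight) x (record size) rather than the full 10 GB.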
Upvotes: 1