Reputation: 143
Dear all,
I'm playing with the num_search_workers parameter and I discovered a strange behaviour with OR-Tools 7.5 on Windows.
I ran the following tests on a 32-core machine and found that 1 thread gives the best performance.
Do you know why?
start to solve using 1 threads ... solved in 13.578 secs
start to solve using 2 threads ... solved in 45.832 secs
start to solve using 4 threads ... solved in 53.031 secs
start to solve using 8 threads ... solved in 62.013 secs
start to solve using 16 threads ... solved in 157.5 secs
start to solve using 32 threads ... solved in 807.778 secs
start to solve using 64 threads ... solved in 386.252 secs
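For reference, a minimal sketch of the loop that produces timings like the above; build_model() is a hypothetical stand-in for the model-building code shown further down, and num_search_workers is the parameter being varied:

from ortools.sat.python import cp_model

for workers in (1, 2, 4, 8, 16, 32, 64):
    model = build_model()  # hypothetical helper; the actual constraints are shown below
    solver = cp_model.CpSolver()
    solver.parameters.num_search_workers = workers  # the parameter under test
    status = solver.Solve(model)
    print(f"start to solve using {workers} threads ... "
          f"solved in {solver.WallTime():.3f} secs")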
The model is more or less like the following.
Consider that self.suggested_decisions is a dictionary of BoolVars (the decision variables).
The problem is:
model.Add(sum(self.scenario.constants['scaling_factor'] * self.suggested_decisions[r][0] for r in self.all_records)
          >= sum(self.suggested_decisions[r][d] * int(0.60 * self.scenario.constants['scaling_factor'])
                 for r in self.all_records for d in self.all_decisions))

model.Add(sum(int(self.scenario.dataset['AMOUNT_FINANCED'][r]) * self.suggested_decisions[r][0]
              for r in self.all_records) >= 2375361256)

model.Add(sum(self.scenario.constants['scaling_factor'] * self.scenario.dataset['Bad'][r] * self.suggested_decisions[r][0]
              for r in self.all_records)
          <= sum(self.suggested_decisions[r][0] * int(self.scenario.constants['scaling_factor'] * 0.038)
                 for r in self.all_records))

model.Maximize(sum(int(self.scenario.dataset['AMOUNT_FINANCED'][r]) * self.suggested_decisions[r][0]
                   for r in self.all_records))
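For readers without the surrounding class, here is a self-contained approximation of the same structure; the record count, amounts and Bad flags below are made-up placeholder data, not the real dataset:

from ortools.sat.python import cp_model

# Placeholder data standing in for self.scenario.dataset / constants.
all_records = range(1000)
all_decisions = range(3)
scaling_factor = 1000
amount_financed = {r: 5_000_000 for r in all_records}        # dataset['AMOUNT_FINANCED']
bad = {r: 1 if r % 20 == 0 else 0 for r in all_records}      # dataset['Bad']

model = cp_model.CpModel()
suggested = {r: [model.NewBoolVar(f"x_{r}_{d}") for d in all_decisions]
             for r in all_records}

# Decision 0 must cover at least 60% of all suggested decisions.
model.Add(sum(scaling_factor * suggested[r][0] for r in all_records)
          >= sum(suggested[r][d] * int(0.60 * scaling_factor)
                 for r in all_records for d in all_decisions))

# Minimum total financed amount among records accepted under decision 0.
model.Add(sum(amount_financed[r] * suggested[r][0] for r in all_records) >= 2375361256)

# Share of 'Bad' records among those accepted must stay at or below 3.8%.
model.Add(sum(scaling_factor * bad[r] * suggested[r][0] for r in all_records)
          <= sum(suggested[r][0] * int(scaling_factor * 0.038) for r in all_records))

model.Maximize(sum(amount_financed[r] * suggested[r][0] for r in all_records))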
Upvotes: 1
Views: 1894
Reputation: 11034
Welcome to the world of parallelism.
With 1 to 8 threads, you are just unlucky. Communication between workers changes the search and slows it down.
Above 8 threads, you are most likely memory-bound.
That being said, this is very rare.
Could you send me the model?
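One way to see what the individual workers are actually doing is to enable the CP-SAT search log; a minimal sketch, reusing the model from the question:

from ortools.sat.python import cp_model

solver = cp_model.CpSolver()
solver.parameters.num_search_workers = 8
solver.parameters.log_search_progress = True  # print search progress from the workers
status = solver.Solve(model)  # 'model' is the CpModel built in the question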
Upvotes: 1