Arek

Reputation: 53

Benchmark vs Solver - same data, different result

Currently, we are implementing timetable planning with OptaPlanner - overall it works great! But we are trying to improve how our solver works - trying different algorithms etc. So we ran the benchmark with a simple config: a common construction heuristic phase, and then HILL_CLIMBING, LATE_ACCEPTANCE and TABU_SEARCH. These are the results:

Benchmark:

HILL: 0hard/-5medium/-5soft
LATE_ACCEPTANCE: 0hard/-5medium/-126soft
TABU: 0hard/-7medium/-4soft
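The benchmark setup described above would look roughly like the following sketch. This is a hypothetical config: the class names, directory, and structure are placeholder assumptions, not the poster's actual file; only the three `localSearchType` values come from the question.

```xml
<!-- Hypothetical benchmark config sketch; directory and problem setup are placeholders -->
<plannerBenchmark xmlns="https://www.optaplanner.org/xsd/benchmark">
  <benchmarkDirectory>local/benchmarkReport</benchmarkDirectory>

  <!-- Shared by all solver benchmarks: the common construction heuristic phase -->
  <inheritedSolverBenchmark>
    <solver>
      <constructionHeuristic/>
    </solver>
  </inheritedSolverBenchmark>

  <solverBenchmark>
    <name>Hill Climbing</name>
    <solver>
      <localSearch>
        <localSearchType>HILL_CLIMBING</localSearchType>
      </localSearch>
    </solver>
  </solverBenchmark>
  <solverBenchmark>
    <name>Late Acceptance</name>
    <solver>
      <localSearch>
        <localSearchType>LATE_ACCEPTANCE</localSearchType>
      </localSearch>
    </solver>
  </solverBenchmark>
  <solverBenchmark>
    <name>Tabu Search</name>
    <solver>
      <localSearch>
        <localSearchType>TABU_SEARCH</localSearchType>
      </localSearch>
    </solver>
  </solverBenchmark>
</plannerBenchmark>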

This is where it starts to get tricky - I'm copying the solver configuration and using the same dataset, yet I get very different results:

Solver with the same dataset:

HILL: 0hard/-11medium/-7soft
LATE_ACCEPTANCE: 0hard/-5medium/-121soft
TABU: 0hard/-11medium/-18soft

So it seems that only LATE_ACCEPTANCE comes close to the benchmark - the others are way off. Any idea why it behaves like that?

Upvotes: 0

Views: 117

Answers (1)

Radovan Synek

Reputation: 1029

Assuming that both the solver and benchmark use the default, REPRODUCIBLE environment mode, might it be caused by different termination conditions?

Note that even if you use the same time-based termination, it may not be fully reproducible due to context switching. To make sure every run with the same configuration ends up with exactly the same score, you can use a step-based termination.
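A step-based termination can be sketched like this. The `stepCountLimit` value here is an arbitrary placeholder; the point is that a fixed step count, unlike a time limit, is immune to context-switching jitter, so a `REPRODUCIBLE` run ends at exactly the same step every time.

```xml
<!-- Sketch: step-count termination for a deterministic run length -->
<solver xmlns="https://www.optaplanner.org/xsd/solver">
  <environmentMode>REPRODUCIBLE</environmentMode>
  <localSearch>
    <localSearchType>HILL_CLIMBING</localSearchType>
    <termination>
      <!-- 5000 is a placeholder; pick a budget comparable to your timed runs -->
      <stepCountLimit>5000</stepCountLimit>
    </termination>
  </localSearch>
</solver>
```

Apply the same termination in both the benchmark config and the standalone solver config, so both are comparing identical search budgets rather than whatever wall-clock time happened to allow.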

Please check the INFO-level logging; each phase reports there the best attained score and the number of steps it took.
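If those phase summaries aren't showing up, a minimal Logback fragment such as the one below (assuming Logback is your logging backend) enables them:

```xml
<!-- logback.xml fragment: surface OptaPlanner's per-phase INFO summaries -->
<configuration>
  <logger name="org.optaplanner" level="info"/>
</configuration>
```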

Upvotes: 1
