Joel Abraham

Reputation: 43

How to find neighboring solutions in simulated annealing?

I'm working on an optimization problem and attempting to use simulated annealing as a heuristic. My goal is to optimize the placement of k objects given some cost function. Solutions take the form of a set of k ordered pairs representing points in an M*N grid. I'm not sure how best to find a neighboring solution given a current solution. I've considered shifting each point by 1 or 0 units in a random direction. What might be a good approach to finding a neighboring solution given a current set of points?

Since I'm also trying to learn more about SA, what makes a good neighbor-finding algorithm and how close to the current solution should the neighbor be? Also, if randomness is involved, why is choosing a "neighbor" better than generating a random solution?

Upvotes: 4

Views: 3494

Answers (1)

CaptainTrunky

Reputation: 1707

I would split your question into several smaller ones:

  1. Also, if randomness is involved, why is choosing a "neighbor" better than generating a random solution?

Usually, you sample multiple points from the neighborhood of your current solution and evaluate all of them. For example, you generate 10 neighbors at random and keep the best one. By doing so you can explore more possible solutions efficiently.

Why is this better than a random guess? Good solutions tend to have a lot in common (e.g. they are close to each other in the search space). By introducing small incremental changes you gradually improve on what you already have, while a random guess could send you to a completely different part of the search space where you may never find an appropriate solution. And because of the curse of dimensionality, random jumps are no better than brute force: there are simply too many places to jump to.
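As a minimal sketch of this idea (in Python, with hypothetical `cost` and `neighbor` callables that you would supply for your own problem), one SA step could generate several candidate neighbors, keep the cheapest, and run it through the usual Metropolis acceptance test:

```python
import math
import random

def sa_step(current, temperature, cost, neighbor, candidates=10):
    """One simulated annealing step: sample several neighbors of the
    current solution, keep the best candidate, and accept it with the
    usual Metropolis criterion. `cost` and `neighbor` are assumed to
    be supplied by the caller (hypothetical interfaces)."""
    # Sample a handful of nearby solutions instead of one random jump.
    best_candidate = min((neighbor(current) for _ in range(candidates)), key=cost)

    delta = cost(best_candidate) - cost(current)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return best_candidate  # improvement, or an uphill move accepted with some probability
    return current             # otherwise keep the current solution
```

The key point is that every candidate is derived from `current`, so the search stays in a region you already know to be reasonably good.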

  2. What might be a good approach to finding a neighboring solution given a current set of points?

I regret to tell you that this question seems to be unanswerable in general. :( It's a mix of art and science. Choosing the right way to explore a search space is highly problem-specific. Even for placement problems under varying constraints, different heuristics may lead to completely different results.

You can try the following:

  1. Random shifts by a fixed number of steps (1, 2, ...). That's your approach (a sketch of this move and of the swap in point 2 follows this list).
  2. Swapping two points.
  3. You can memorize bad moves for some time (similar to tabu search), so that only 'good' ones are used for the next 100 steps or so.
  4. Use a greedy approach to generate a suboptimal placement, then improve it with the methods above.
  5. Try random restarts. At some stage, drop all of your progress (except for the best solution found so far), raise the temperature and start again from a random initial point. You can do this every 10,000 steps or so.
  6. Fix some points. Put an object at point (x,y) and do not move it at all; search for the best possible solution under this constraint.
  7. Prohibit some combinations of objects, e.g. "distance between p1 and p2 must be larger than D".
  8. Mix all of the steps above in different ways.
  9. Try to understand your problem in its tiniest details. You can derive useful information/constraints/insights from your problem description. Assume that you can't solve the placement problem in general, so try to reduce it to a more specific (== simpler, == with a smaller search space) problem.
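To make moves 1 and 2 concrete, here is a minimal Python sketch. It assumes a solution is stored as a list of (row, col) tuples where the index identifies the object; that representation and the names `shift_move`/`swap_move` are my own choices, not anything fixed by the question:

```python
import random

def shift_move(solution, M, N, max_step=1):
    """Move 1: shift one randomly chosen object by up to `max_step`
    cells in each direction, clamped to the M x N grid; the move is
    skipped if the target cell is already occupied."""
    points = list(solution)              # copy so the caller's solution stays intact
    i = random.randrange(len(points))
    r, c = points[i]
    new_r = min(max(r + random.randint(-max_step, max_step), 0), M - 1)
    new_c = min(max(c + random.randint(-max_step, max_step), 0), N - 1)
    if (new_r, new_c) not in points:     # keep placements collision-free
        points[i] = (new_r, new_c)
    return points

def swap_move(solution):
    """Move 2: swap the positions of two randomly chosen objects.
    Only meaningful when the objects are distinguishable, i.e. the
    cost function cares which object sits where."""
    points = list(solution)
    i, j = random.sample(range(len(points)), 2)
    points[i], points[j] = points[j], points[i]
    return points
```

A neighbor function for the SA loop can then pick one of these moves at random on each step, and you can bias the mix (e.g. mostly shifts, occasional swaps) as the temperature drops.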

I would say the last bullet is the most important. Look closely at your problem and consider only its practical aspects. For example, the size of your problem might allow you to enumerate something, or maybe some placements are simply not possible for you, and so on. There is no way for SA to derive such domain-specific knowledge by itself, so help it!

How can you tell that your heuristic is a good one? Only by practical evaluation. Prepare a decent set of test cases with obvious/well-known answers and try different approaches. Use well-known benchmarks if any exist.

I hope that this is helpful. :)

Upvotes: 4
