fuenfundachtzig

Reputation: 8352

For which types of problems are genetic algorithms an appropriate choice?

(I've pondered some time over whether this is a good question for a Q&A site. I think it is -- the answers may be partly subjective but I'm hopefully looking for objective criteria in the answers. It may be though that SO is not the best site in the network but let's try here.)

I am asking this question because just recently I have tried to use a genetic algorithm for an optimization problem with a large parameter space, and it failed, i.e. it wasn't improving over time. I switched to simulated annealing and got much better convergence. So either my GA implementation was wrong or it was the wrong tool for the job.

One typical example for an application of a GA is the Travelling Salesman Problem, the genome being the order in which the cities are visited. This makes sense: "Breeding" or "crossover" mixes and rearranges subroutes that are potentially already optimal locally but by changing the order in which these subroutes are visited, the overall efficiency may improve. A similar example would be the knapsack problem.
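To make the subroute-mixing idea concrete, here is a minimal sketch of order crossover (OX) for TSP tours; the operator and permutation encoding are standard, but this particular implementation is only illustrative:

```python
import random

def order_crossover(p1, p2):
    """Order crossover (OX): copy a slice of parent 1, then fill the
    remaining positions with parent 2's cities in their original order."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]                      # keep a subroute from parent 1
    fill = [c for c in p2 if c not in child]  # parent 2's order for the rest
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

random.seed(0)
tour_a = list(range(8))
tour_b = [3, 7, 0, 4, 1, 6, 2, 5]
child = order_crossover(tour_a, tour_b)
print(child)                      # a valid tour: a permutation of 0..7
assert sorted(child) == tour_a    # every city visited exactly once
```

The point is that the child inherits an intact, possibly locally optimal subroute from one parent while the remaining cities keep their relative order from the other parent.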

Another example I have encountered was using a GA to find the maximum of a 2-D function, the genome being the coordinates. This was probably chosen as a simple pedagogical example to focus on how a GA can be implemented, but then again whether a GA is the right choice here in the first place seems less than clear: Breeding just "arbitrarily" swaps coordinates or -- in this implementation -- takes the average of the individual genomes, losing all of the optimization history. Thus to me, this is an example of a problem that is intrinsically not suited for a GA, because mixing genomes (coordinates), which is the core idea of GAs, would not seem beneficial here. And without it, i.e. when reduced to only mutations, a GA is just a random walk in phase space with restarts.

In general, GAs seem to be apt for problems with discrete solution spaces with high combinatorial complexity rather than exploration of continuous phase spaces. What are other aspects to be considered?

Upvotes: 1

Views: 704

Answers (2)

Willem Hendriks
Willem Hendriks

Reputation: 1487

It is really difficult to predict the outcome of both SA and GA (just like life itself?).

The mutation step of a GA is somewhat similar to the neighbour proposal of SA, so the key difference is the breeding (crossover) part of the GA.

Say x and y are both proposed solutions to your problem.

If it could make sense to take one part of x and one part of y to build a better solution, a GA has potential.

You have to use a combination of gut feeling and experiments to check whether the above is true.

For searching through parameter spaces, that could indeed make sense: if parameters alpha and beta work well in one candidate, you might want to take gamma from another candidate solution that also worked well.
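That intuition corresponds to uniform crossover over a parameter vector. A minimal sketch (the parameter names alpha/beta/gamma and all values are made up for illustration):

```python
import random

def uniform_crossover(x, y):
    """Pick each parameter from one parent or the other at random."""
    return {k: (x[k] if random.random() < 0.5 else y[k]) for k in x}

# Hypothetical hyper-parameter sets; the names and values are illustrative only.
a = {"alpha": 0.9, "beta": 32, "gamma": 1e-3}
b = {"alpha": 0.5, "beta": 64, "gamma": 1e-4}

random.seed(0)
child = uniform_crossover(a, b)
print(child)   # each value comes from either parent a or parent b
```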

For minimising a function in 2D (say 0 is the minimum), this usually doesn't make much sense: if (a,b) is close to 0 and (c,d) is close to 0, then (a,d) and (c,b) are, in many cases, no better than any other random guess.
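A concrete illustration of this failure mode, using the (non-separable) Rosenbrock function as the 2-D objective (an example I'm adding, not from the original question; for a separable bowl-shaped function, coordinate swapping would actually remain reasonable):

```python
def rosenbrock(x, y):
    """Banana-shaped valley, global minimum 0 at (1, 1)."""
    return 100 * (y - x**2) ** 2 + (1 - x) ** 2

# Two decent parents lying in the curved valley y = x^2:
p1, p2 = (0.0, 0.0), (1.5, 2.25)
print(rosenbrock(*p1), rosenbrock(*p2))   # 1.0 0.25 (both parents are good)

# Coordinate-swapping crossover produces two poor children:
c1, c2 = (p1[0], p2[1]), (p2[0], p1[1])
print(rosenbrock(*c1), rosenbrock(*c2))   # 507.25 506.5 (far off the valley)
```

Recombining coordinates of two good solutions jumps off the curved valley entirely, so crossover contributes nothing that random restarts wouldn't.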

Hill-climbing-based searches would make more sense in the latter case. The breeding part is where a GA differs from hill climbing.

Upvotes: 1

Ash

Reputation: 4718

Two situations that come to mind where genetic algorithms are useful are these (I'm not an evolutionary computation expert, so feel free to correct me):

  1. Optimization without objectives. Some objective functions result in very irregular loss landscapes, so that gradient-based methods only lead to local minima. In that case, it's interesting to explore the space based on "behavioral novelty" or "creativity". See the classical novelty search paper for example. In simple terms, you map sequences of states (of the robot, agent or whatever) to behaviors in a behavior space, and try to search that behavioral space uniformly. Sequences of states are investigated based on how novel they are (e.g. based on their distance to older solutions in behavior space). This is also useful when you are interested in having diverse solutions that perform well relative to their neighbors (e.g. evolving diverse virtual creatures with novelty search and local competition).

    Note that here, it makes sense to use genetic algorithms as you have neither objectives nor gradient information. You can't use simulated annealing either as it is tied to a single objective function, and doesn't care about behavioral novelty.

  2. When we're not interested in a single solution, but rather want to know which parts of the space have a high probability of yielding good solutions. One way to do that is to define a probability distribution with parameters, say, theta, sample from it, then compute an expectation/average over the sampled solutions, and use its gradient with respect to theta to optimize the distribution (note that this gradient is with respect to theta, not to the initial search space, where we don't want to compute gradients).
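The novelty criterion in point 1 can be sketched in a few lines. The k-nearest-neighbour novelty score below is a common formulation, but the metric, the value of k, and the toy behavior descriptors are my assumptions:

```python
import math

def novelty(b, archive, k=3):
    """Novelty of behavior descriptor b: mean Euclidean distance to its
    k nearest neighbours in the archive of previously seen behaviors."""
    dists = sorted(math.dist(b, a) for a in archive)
    return sum(dists[:k]) / k

archive = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (1.0, 1.0)]
print(novelty((0.05, 0.05), archive))  # small: behavior already well explored
print(novelty((2.0, 2.0), archive))    # large: novel behavior, gets selected
```

Selection then favors high-novelty individuals instead of high-fitness ones, which is why no objective function or gradient is needed.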
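Point 2 describes the idea behind estimation-of-distribution and evolution-strategy methods. Below is a minimal sketch of a score-function gradient update on the mean of a 1-D Gaussian search distribution; the toy objective, learning rate, and population size are all invented for illustration:

```python
import random

random.seed(0)

def f(x):
    """Toy objective with its maximum at x = 3."""
    return -(x - 3.0) ** 2

# Search distribution: a Gaussian with parameters theta = (mu, sigma).
mu, sigma = 0.0, 1.0
lr = 0.05                          # learning rate (an assumed setting)

for _ in range(300):
    eps = [random.gauss(0.0, 1.0) for _ in range(50)]
    fitness = [f(mu + sigma * e) for e in eps]
    baseline = sum(fitness) / len(fitness)
    # Score-function (REINFORCE-style) estimate of d E[f] / d mu;
    # note that no gradient of f itself is ever computed.
    grad_mu = sum((fi - baseline) * e
                  for fi, e in zip(fitness, eps)) / (len(eps) * sigma)
    mu += lr * grad_mu

print(mu)                          # mu ends up close to the optimum at 3
```

The distribution parameter theta (here just mu) is what gets optimized, and what you learn is where good solutions concentrate rather than a single point.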

Upvotes: 1
