Faraz Ramtin

Reputation: 345

Determining whether an integer program is infeasible

Suppose we have an integer or mixed-integer program with a couple of thousand constraints.

How can we determine whether this IP / MIP is feasible?

Upvotes: 4

Views: 1412

Answers (2)

mattmilten

Reputation: 6706

One thing you can say for sure: if the linear relaxation of the problem is already infeasible, then so is the integer programming problem.
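This check can be run with any LP solver. A minimal sketch using SciPy's `linprog` on a made-up two-variable model (the constraint data here is purely illustrative):

```python
# Feasibility check via the LP relaxation (illustrative data).
# If the relaxation is infeasible, the MIP is certainly infeasible;
# if the relaxation is feasible, the MIP may still be infeasible.
from scipy.optimize import linprog

# Hypothetical model: x, y >= 0 with
#   x + y <= 4
#   x + y >= 6   (contradicts the first constraint)
# Encode as A_ub @ x <= b_ub; a >= constraint is negated.
A_ub = [[1, 1],     # x + y <= 4
        [-1, -1]]   # -(x + y) <= -6, i.e. x + y >= 6
b_ub = [4, -6]

# Zero objective: we only care about feasibility of the relaxation.
res = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)])

if not res.success:  # status 2 means "infeasible"
    print("LP relaxation infeasible -> the MIP is infeasible")
else:
    print("LP relaxation feasible -> the MIP may or may not be feasible")
```

Note the asymmetry: a feasible relaxation tells you nothing definite about the integer problem, so this is only a cheap necessary condition.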

Upvotes: 3

willeM_ Van Onsem

Reputation: 476584

Suppose we have an integer or mixed-integer programming problem with a couple of thousand constraints.

The number of constraints does not necessarily correlate with infeasibility: usually, constraints limit the number of possibilities that have to be enumerated. A related problem is random 3-SAT, where, for instance, the hardest instances are those in which the number of clauses is roughly four times the number of variables.

How can we determine whether this problem is feasible at all?

There are good (Mixed) Integer Programming solvers available that can solve some (hard) problems in reasonable time. Nevertheless, generic Integer Programming is known to be NP-hard. This means it is unlikely that an algorithm exists that solves these problems in reasonable time in general. Sometimes we are lucky and the Integer Programming problem has some structure we can exploit to find a solution efficiently, but as said: in general it is a hard problem.

Solvers usually work with branch-and-bound, where relaxation restricts the domains of the variables until a stable condition is reached. Then the solver picks a value for one of the variables (which variable, and which value to try first, are heavily researched topics, since these choices have a large impact on how quickly a solution is found). The problem is then relaxed further, until either it is proven that no solution exists with that value, or the solver has to assign the next value. If the model is proven to be unsatisfiable under a given set of assignments, the solver backtracks: it undoes some of the assignments, reassigns values, and continues the search. Eventually a solution will be found (although this can take a very long time), or the solver proves the problem unsatisfiable (no solution exists).

If a solution is found, we are usually not done yet, since we typically want an optimal solution with respect to some objective function. In that case we add a constraint that, from now on, we only look for solutions better than the best one found so far. We keep doing this until we run out of new solutions, at which point we have proven optimality.
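The branch-and-bound loop described above can be sketched in a few lines. The following toy version uses SciPy's `linprog` for the LP relaxations on a made-up two-variable maximization problem; real solvers add presolve, cutting planes, and far more sophisticated branching and pruning rules:

```python
# Toy branch-and-bound for a pure integer program (illustrative data):
#   maximize  x + y   subject to   2x + 2y <= 7,   x, y >= 0 integer.
import math
from scipy.optimize import linprog

A_ub = [[2, 2]]
b_ub = [7]
c = [-1, -1]  # linprog minimizes, so negate the objective to maximize x + y

def branch_and_bound(bounds, best=(None, -math.inf)):
    # Solve the LP relaxation under the current per-variable bounds.
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    if not res.success:
        return best                     # relaxation infeasible: prune this node
    relax_val = -res.fun
    if relax_val <= best[1]:
        return best                     # bound no better than incumbent: prune
    for i, xi in enumerate(res.x):
        if abs(xi - round(xi)) > 1e-6:  # found a fractional variable: branch
            lo, hi = bounds[i]
            left = list(bounds)
            left[i] = (lo, math.floor(xi))       # branch xi <= floor(xi)
            right = list(bounds)
            right[i] = (math.ceil(xi), hi)       # branch xi >= ceil(xi)
            best = branch_and_bound(left, best)
            best = branch_and_bound(right, best)
            return best
    # All variables integral: record a new incumbent solution.
    xs = [int(round(xi)) for xi in res.x]
    return (xs, -sum(ci * xi for ci, xi in zip(c, xs)))

solution, value = branch_and_bound([(0, None), (0, None)])
print(solution, value)  # an optimal integer point; the optimum value is 3
```

Note how pruning works in both directions: an infeasible relaxation kills a subtree immediately, and a relaxation bound no better than the incumbent makes further search in that subtree pointless.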

In case it is easy to find valid solutions (with respect to the hard constraints) but hard to obtain the best one, one can use metaheuristics to approximate the best solution. Here one considers a "solution space" of solutions that satisfy the hard constraints. By constructing a number of "mutation functions" that take a valid solution and turn it into another valid solution, one obtains an algorithm that searches for the best solution by iteratively manipulating the current one. If we run out of time, we return the best solution found so far. Although we never have a guarantee that the solution is optimal, metaheuristics usually work quite well and return a close-to-optimal solution. Some metaheuristics, like Simulated Annealing, can even make statistical guarantees about the quality of the solution.
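As a rough sketch of this idea, the following simulated-annealing loop searches over bit vectors satisfying a made-up knapsack-style hard constraint; the instance data, the single-bit-flip mutation, and the cooling schedule are all illustrative assumptions:

```python
# Simulated annealing over feasible solutions (illustrative data):
# pick items without exceeding the capacity (the hard constraint)
# while trying to maximize total value.
import math
import random

weights = [3, 5, 2, 7, 4]
values = [4, 6, 3, 8, 5]
CAPACITY = 10

def feasible(x):
    return sum(w for w, xi in zip(weights, x) if xi) <= CAPACITY

def objective(x):
    return sum(v for v, xi in zip(values, x) if xi)

def mutate(x):
    # Flip one random bit; retry until the hard constraint still holds
    # (dropping an item is always feasible, so this loop terminates).
    while True:
        y = list(x)
        i = random.randrange(len(y))
        y[i] = 1 - y[i]
        if feasible(y):
            return y

def anneal(steps=20000, t=5.0, cooling=0.999):
    random.seed(0)              # reproducible runs for this sketch
    x = [0] * len(weights)      # trivially feasible starting point
    best, best_val = x, objective(x)
    for _ in range(steps):
        y = mutate(x)
        delta = objective(y) - objective(x)
        # Always accept improvements; accept worsenings with probability
        # exp(delta / t), which shrinks as the temperature t cools down.
        if delta >= 0 or random.random() < math.exp(delta / t):
            x = y
            if objective(x) > best_val:
                best, best_val = x, objective(x)
        t *= cooling
    return best, best_val

sol, val = anneal()
print(sol, val)
```

The mutation operator is what keeps the search inside the space of solutions satisfying the hard constraints; only the objective is handled heuristically.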

Upvotes: 3
