Unknown

Reputation: 67

Cplex/OPL local search

I have a model implemented in OPL. I want to use this model to implement a local search in Java: initialize solutions with some heuristics, give these initial solutions to CPLEX to find a better solution based on the model, but also limit the search to a specific neighborhood. Any idea how to do this?

Also, how can I limit the range of all variables? And what is best: implementing these heuristics and the local search in OPL itself, in Java, or even in C++?

Thanks in advance!

Upvotes: 2

Views: 1997

Answers (2)

Ram Narasimhan

Reputation: 22506

Those are several questions. So here are some pointers and suggestions:

  1. In CPLEX, you give your model an initial solution with IloOplCplexVectors(). There is a good example in IBM's documentation of how to pass CPLEX a starting solution this way.

  2. Within OPL, you can do the same. You basically set a series of values for your variables, and hand those over to CPLEX. (See this example.)

  3. Limiting the search to a specific neighborhood: There is no easy way to respond without knowing the details. But there are two ways that people do this:

    a. Change the objective to favor that 'neighborhood' and make other areas unattractive.

    b. Add constraints that weed out other neighborhoods from the search space.

  4. Regarding limiting the range of variables in OPL, you can do it directly:

    dvar int supply in minQty..maxQty;
    

Or for a whole array of decision variables, you can do something along the lines of:

    range CreditsAllowed = 3..12;
    dvar int credits[student] in CreditsAllowed;
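To make points 1 and 2 concrete, here is a rough sketch of a warm start passed in from an OPL script (main block). The element names x and startVals are hypothetical, and the IloOplCplexVectors method names are written from memory of IBM's scripting examples, so check them against the documentation:

main {
  thisOplModel.generate();
  // attach the heuristic values (startVals) to the decision variables (x)
  var vectors = new IloOplCplexVectors();
  vectors.attach(thisOplModel.x, thisOplModel.startVals);
  vectors.setVectors(cplex);  // hand the starting point to CPLEX
  cplex.solve();
}

The idea is simply: generate the model, pair each dvar array with an array of starting values, and let CPLEX use that pairing as a MIP start before solving.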

Hope this helps you move forward.

Upvotes: 1

TimChippingtonDerrick

Reputation: 2072

Just to add some related observations:

Re Ram's point 3: We have had a lot of success with approach b. In particular, it is simple to add constraints that fix some of the variables to values from a known solution, and then re-solve for the rest of the variables in the problem. More generally, you can add constraints to limit the values to be similar to a previous solution, like:

var >= previousValue - 1;
var <= previousValue + 2;

This is no use for binary variables of course, but for general integer or continuous variables it can work well. This approach can be generalised for collections of variables:

sum(i in indexSet) var[i] >= (sum(i in indexSet) value[i]) - 2;
sum(i in indexSet) var[i] <= (sum(i in indexSet) value[i]) + 2;

This can work well for sets of binary variables. For an array of 100 binary variables of which maybe 10 had the value 1, we would be looking for a solution where at least 8 have the value 1, but not more than 12. Another variant is to limit something like the Hamming distance (assume that the vars are all binary here):

dvar int changed[indexSet] in 0..1;
forall(i in indexSet)
  if (previousValue[i] <= 0.5)
    changed[i] == (var[i] >= 0.5); // was zero before
  else
    changed[i] == (var[i] <= 0.5); // was one before

sum(i in indexSet) changed[i] <= 2;

Here we would be saying that out of an array of e.g. 100 binary variables, only a maximum of two would be allowed to have a different value from the previous solution.

Of course you can combine these ideas. For example, add simple constraints to fix a large part of the problem to previous values, while leaving some other variables to be re-solved, and then add constraints on some of the remaining free variables to limit the new solution to be near to the previous one. You will notice of course that these schemes get more complex to implement and maintain as we try to be more clever.
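As a rough OPL sketch of that combination (fixedSet, freeSet and previousValue are hypothetical names for the frozen index set, the re-solved index set and the stored solution):

subject to {
  // freeze most of the problem at the previous solution
  forall(i in fixedSet)
    var[i] == previousValue[i];
  // let the rest move, but only near the previous values
  forall(i in freeSet) {
    var[i] >= previousValue[i] - 1;
    var[i] <= previousValue[i] + 1;
  }
}

How you split the indices between fixedSet and freeSet is exactly the neighbourhood-design question discussed below.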

To make the local search work well you will need to think carefully about how you construct your local neighbourhoods - too small and there will be too little opportunity to make the improvements you seek, while if they are too large they take too long to solve, so you don't get to make so many improvement steps.

A related point is that each neighbourhood needs to be reasonably internally connected. We have done some experiments where we fixed the values of maybe 99% of the variables in a model and solved for the remaining 1%. When the 1% was clustered together in the model (e.g. all the allocation variables for a subset of resources) we got good results, while in comparison we got nowhere by just choosing 1% of the variables at random from anywhere in the model.

An often overlooked idea is to invert these same limits on the model, as a way of forcing some changes into the solution to achieve a degree of diversification. So you could add a constraint to force a specific value to be different from a previous solution, or ensure that at least two out of an array of 100 binary variables have a different value from the previous solution. We have used this approach to get a sort-of tabu search with a hybrid matheuristic model.
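For example, reusing the changed indicators from the Hamming-distance construction above, the diversification version just reverses the direction of the inequality (again a sketch, with the same hypothetical names):

// force at least two of the binaries to take a different value from
// the previous solution, pushing the search away from it
sum(i in indexSet) changed[i] >= 2;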

Finally, we have mainly done this in C++ and C#, but it would work perfectly well from Java. We have not tried it much from OPL, but it should be fine too. The key for us was being able to traverse the problem structure and use problem knowledge to choose the sets of variables we freeze or relax; we just found that easier and faster to code in a language like C#, though the modelling side is then more difficult to write and maintain. We are maybe a bit "old-school" and like to have detailed, fine-grained control of what we are doing. We find we need to create many more arrays and index sets in OPL to achieve what we want, while in a language like C# we can achieve the same effect with more intelligent loops and so on, without creating so many data structures.

Upvotes: 1
