Katsu

Reputation: 111

Distance objective optimisation

I'm building a reoptimization model and I would like to include a constraint that reduces the distance between the initial solution and the reoptimized solution. I'm doing staff scheduling, and to do so I want to penalize each assignment in the reoptimized solution that differs from the initial solution.

Before I start: I'm new to optimization modeling, so the way I built the constraint may be wrong.

#1 Extract the values of my main variable from the initial solution

ModelX_DictExtVal = model.x.extract_values()

#2 Create a new binary variable which activates when the main variable `ModelX_DictExtVal[n,s,d]` of the initial
#solution is 1 (employee n works shift s on day d) and the value of `model.x[n,s,d]` in the reoptimized solution is different.

model.alpha_distance = Var(model.N_S_D, within=Binary)

#3 Model a constraint to activate my variable.

def constraint_distance(model, n, s, d):
    v = ModelX_DictExtVal[n,s,d]
    if v == 1 and ModelX_DictExtVal[n,s,d] != model.x[n,s,d]:
        return model.alpha_distance[n,s,d] == 1
    elif v == 0:
        return model.alpha_distance[n,s,d] == 0
        
model.constraint_distance = Constraint(model.N_S_D, rule = constraint_distance)

#4 Penalize in my objective function every assignment where the variable is equal to one

model.ObjFunction = Objective(expr = sum(model.alpha_distance[n,s,d] * WeightDistance
                              for n in model.N for s in model.S for d in model.D))

Issue: I'm not sure about what I'm doing in part 3, and I get an error when v == 1:

ERROR: Rule failed when generating expression for constraint
    constraint_distance with index (0, 'E', 6): ValueError: Constraint
    'constraint_distance[0,E,6]': rule returned None

I am also wondering: since I am reusing the same model for the re-optimization, does the model keep the initial solution's values in model.x[n,s,d], so that the comparison ModelX_DictExtVal[n,s,d] != model.x[n,s,d] is done against the initial solution instead of the new assignments during the re-optimization phase...

Upvotes: 0

Views: 105

Answers (1)

AirSquid

Reputation: 11883

You are right to suspect part 3. :)

So you have some "initial values" that could be either the original schedule (before optimizing) or some other preliminary optimization. And your decision variable is binary, indexed by [n,s,d] if I understand your question.

In your constraint you cannot employ an if-else structure based on a comparison test of your decision variable. The value of that variable is unknown at the time the constraint is built, right?

You are on the right track, though. What you really want is for your alpha_distance (or penalty) variable to capture any change, taking the value 1 wherever the reoptimized assignment differs from the initial one. That is an absolute value operation, but it can be captured with 2 constraints. Consider (in pseudocode):

penalty = |x.new - x.old|    # is what you want

So introduce 2 constraints (each indexed fully by [n,s,d]):

penalty >= x.new - x.old
penalty >= x.old - x.new

Then, as you are doing now, include the penalty in your objective, optionally multiplied by a weight.
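If it helps, here is a rough Pyomo sketch of those two constraints plus the penalty objective, replacing your parts #2 to #4 and reusing the names from your post (model.x, model.N_S_D, ModelX_DictExtVal, WeightDistance). I'm assuming model.N_S_D is the full set of (n, s, d) tuples, so adjust the indexing if yours is built differently:

from pyomo.environ import Var, Constraint, Objective, Binary, minimize

# alpha_distance[n,s,d] ends up as |x_new - x_old| for each assignment
model.alpha_distance = Var(model.N_S_D, within=Binary)

# alpha >= x_new - x_old  (catches 0 -> 1 changes)
def distance_pos_rule(model, n, s, d):
    return model.alpha_distance[n, s, d] >= model.x[n, s, d] - ModelX_DictExtVal[n, s, d]
model.distance_pos = Constraint(model.N_S_D, rule=distance_pos_rule)

# alpha >= x_old - x_new  (catches 1 -> 0 changes)
def distance_neg_rule(model, n, s, d):
    return model.alpha_distance[n, s, d] >= ModelX_DictExtVal[n, s, d] - model.x[n, s, d]
model.distance_neg = Constraint(model.N_S_D, rule=distance_neg_rule)

# penalize every change; minimization drives alpha_distance to 0 wherever x is unchanged
model.distance_penalty = Objective(
    expr=sum(WeightDistance * model.alpha_distance[n, s, d]
             for (n, s, d) in model.N_S_D),
    sense=minimize)

Because the objective pushes alpha_distance down, the two >= constraints are all you need: no if/else on the variable, and the old values enter only as plain numbers from the extract_values() dict, so re-solving the same model does not change them.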

Comment back if that doesn't make sense...

Upvotes: 1
