Reputation: 121
I am trying to solve a linear programming problem with the following constraints:

sum of (x_ijt + x_jit) over j in N \ {i} equals 1, for each i in N and each t in T
sum of (x_ijt + x_jit) over t in T equals 2, for each i, j in N with i < j

for some given values of N and T. So suppose N = {1,2} and T = {1,2}.
It is easy to write the sums out for small |N|, but that becomes impractical as N gets large.
I'm confused as to how to actually code these sums by indexing on my objective variables.
I used the following to create my objective variables:
x_ijt_holding = []
for t in range(1, T+1):
    for i in range(1, n+1):
        for j in range(1, n+1):
            # Skip the x_i,i,t entries
            if j == i:
                continue
            element = "x" + str(i) + ',' + str(j) + ',' + str(t)
            x_ijt_holding.append(element)

x_ijt = []
for name in x_ijt_holding:
    x_ijt.append(pulp.LpVariable(name, cat="Binary"))
Initially I thought I could just define each constraint with LpVariable, but I realized the solver doesn't like that.
For example I did:
# Constraint 2 enter entries
x_ijt_2_holding = []
for j in range(1, n+1):
    for i in range(1, j):
        equations_2 = []
        equations_3 = []
        for t in range(1, T+1):
            equations_2.append("x" + str(i) + ',' + str(j) + ',' + str(t))
            equations_2.append("x" + str(j) + ',' + str(i) + ',' + str(t))
        x_ijt_2_holding.append(equations_2)

# Constraint 2 as LpVariable:
x_ijt_con2 = []
for group in x_ijt_2_holding:
    temp = []
    for name in group:
        temp.append(pulp.LpVariable(name, cat="Binary"))
    x_ijt_con2.append(temp)
So how would I then code the constraints into the problem?
Upvotes: 0
Views: 3135
Reputation: 1969
Constraints define relationships, written as equations, between previously defined variables. When you build your constraints, you need to reference the variables you defined in the first part. Storing your variables in dicts makes it much easier to reference them later.
Take a look at this:
"""
Optimizing
sum(x[i][j][t] + x[j][i][t]) ==1 where j in N \ {i} for each i in N and for each t in T
sum(x[i][j][t] + x[j][i][t]) ==2 where t in T for each i,j in N, i < j
programmer Michael Gibbs
"""
import pulp
N = [1,2]
T = [1,2]
model = pulp.LpProblem("basis")
# variables
# x[i][j][t]
x = {
i:{
j:{
t:pulp.LpVariable(
'x_' + str(i) + '_' + str(j) + '_' + str(t),
cat=pulp.LpBinary
)
for t in T
}
for j in N if j != i
}
for i in N
}
# constraints
#sum(x[i][j][t] + x[j][i][t]) ==1 where j in N \ {i} for each i in N and for each t in T
for i in N:
for t in T:
c = pulp.lpSum([x[i][j][t] + x[j][i][t] for j in N if j != i]) == 1
model += c
#sum(x[i][j][t] + x[j][i][t]) ==2 where t in T for each i,j in N, i < j
for i in N:
for j in N:
if i < j:
c = pulp.lpSum([x[i][j][t] + x[j][i][t] for t in T]) == 2
model += c
# no objective
model.solve()
print("i","j","t","value")
print('-------------------')
[print(i,j,t, pulp.value(x[i][j][t])) for i in N for j in N if i != j for t in T]
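As a follow-up, once model.solve() returns you can check whether the solve actually succeeded and collect the values into an ordinary dict. This is only a sketch assuming the model and x dict from the code above; the commented-out cost dict is a purely hypothetical placeholder showing where an objective would go, since your actual objective wasn't shown:
# Check the solver outcome before trusting the variable values
status = pulp.LpStatus[model.status]   # e.g. 'Optimal', 'Infeasible', ...
print("status:", status)

# Collect the chosen assignments into a plain dict keyed by (i, j, t)
solution = {
    (i, j, t): pulp.value(x[i][j][t])
    for i in N for j in N if j != i for t in T
}

# If you do have an objective, build it from the same x dict and add it to
# the model *before* calling model.solve(), for example a total cost:
# cost = {(i, j, t): 1 for i in N for j in N if j != i for t in T}  # placeholder data
# model += pulp.lpSum(cost[i, j, t] * x[i][j][t]
#                     for i in N for j in N if j != i for t in T)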
Upvotes: 1