Hummels420

Reputation: 21

scipy.optimize.linprog not finding a solution even when one exists

I am trying the following code with Python 2.7.12 and NumPy 1.11.0.

import numpy as np
from scipy.optimize import linprog

A = np.matrix([[1, 0, 0, 1], [0, 1, 0, 1], [0, 1, 1, 0]])
c = np.zeros(A.shape[1])  # zero objective: any feasible point is optimal
res = linprog(c, A_eq=A, b_eq=[0, 1, -1], options=dict(bland=True))
print(res.x)
print(A.dot([0, 1, -2, 0]))  # check that [0, 1, -2, 0] satisfies A_eq x = b_eq

The output of the above is

nan
[[ 0  1 -1]]

So scipy.optimize.linprog does not find a solution even though one exists, as is evident from the dot product of A_eq with [0, 1, -2, 0], which equals b_eq.

A similar question was asked here, and I tried the solutions suggested there (i.e. adding options=dict(bland=True) or adjusting the tolerance value), but I still get the same erroneous output as posted above. What could be the reason for this behaviour? Thank you.

Upvotes: 1

Views: 358

Answers (1)

Hummels420

Reputation: 21

I am the OP and the solution was adding the bounds explicitly as follows:

res = linprog(c, A_eq=A, b_eq=[0, 1, -1], bounds=(None, None))

I was under the impression that, by default, linprog places no bounds on the solution, but in fact the default bounds are (0, None), i.e. non-negative values.

This is mentioned in the docs:

bounds : sequence, optional

(min, max) pairs for each element in x, defining the bounds on that parameter. Use None for one of min or max when there is no bound in that direction. By default bounds are (0, None) (non-negative) ...
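
For reference, here is a self-contained version of the corrected script (a sketch: since the objective is zero, any feasible point is optimal, so the exact vector returned may differ from [0, 1, -2, 0] depending on the SciPy version and method):

import numpy as np
from scipy.optimize import linprog

A = np.matrix([[1, 0, 0, 1], [0, 1, 0, 1], [0, 1, 1, 0]])
c = np.zeros(A.shape[1])

# Allow every variable to be unbounded in both directions
# (a single (min, max) tuple applies to all decision variables).
res = linprog(c, A_eq=A, b_eq=[0, 1, -1], bounds=(None, None))

print(res.status)      # 0 indicates the solver reports success
print(res.x)           # some feasible point (not necessarily [0, 1, -2, 0])
print(A.dot(res.x))    # should reproduce b_eq = [0, 1, -1]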

Upvotes: 1
