TobSta

Reputation: 786

scipy minimize inequality constraint function

I need to constrain my loss so that the prediction is always positive. So I have:

import math
import numpy as np
from scipy.optimize import minimize

x = [1.0, 0.64, 0.36, 0.3, 0.2]
y = [1.0, 0.5, 0.4, -0.1, -0.2]
alpha = 0

def loss(w, x, y, alpha):
    loss = 0.0
    for y_i, x_i in zip(y, x):
        loss += (y_i - np.dot(w, x_i)) ** 2
    return loss + alpha * math.sqrt(np.dot(w, w))

res = minimize(loss, 0.0, args=(x, y, alpha))

Now I want to add the constraints, but the examples I found mostly constrain x to lie between bounds, not np.dot(w, x) >= 0. What would such a constraint look like?

EDIT: I want to use the constraints parameter of scipy.optimize.minimize, so I think it should look something like this:

def con(w, x):
    total = 0.0
    for x_i in x:
        total += np.dot(w, x_i)
    return total


cons = ({'type': 'ineq', 'fun': con},)
res = minimize(loss, 0.0, args=(x, y, alpha), constraints=cons)

I also removed the second constraint for simplicity.

EDIT2: I changed my problem to the following: the constraint is that w*x has to be greater than 1, and I changed the targets to all negatives. I also fixed the args, so it runs now:

import math
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 0.64, 0.36, 0.3, 0.2])
y = [-1.0, -0.5, -0.4, -0.1, -0.2]
alpha = 0

def con(w, x, y, alpha):
    print(w * x)
    return np.array((w * x) - 1).sum()


cons = ({'type': 'ineq', 'fun': con, 'args': (x, y, alpha)},)

def loss_new_scipy(w, x, y, alpha):
    loss = 0.0
    for y_i, x_i in zip(y, x):
        loss += (y_i - np.dot(w, x_i)) ** 2
    return loss + alpha * math.sqrt(np.dot(w, w))

res = minimize(loss_new_scipy, np.array([1.0]), args=(x, y, alpha), constraints=cons)
print(res)

But unfortunately the result for w is 2.0. That is indeed positive, and the constraint clearly had an effect, since the result is far from fitting the function to the targets, but the predictions w*x are not all above 1.0.

EDIT3: I just realized that the sum of my predictions minus 1 equals 0 now, but I want each prediction to be greater than 1.0. So with w = 2.0,

w*x = [ 2.00000001  1.28000001  0.72        0.6         0.4       ] 

and

(w*x) - 1 = [ 1.00000001  0.28000001 -0.28       -0.4        -0.6       ]

whose sum equals 0.0. But I want all predictions w*x to be greater than 1.0, so all 5 values in w*x should be at least 1.0.
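(Editor's note: a minimal sketch of the per-component variant being asked for here. It relies on the documented SciPy behavior that an 'ineq' constraint function may return an array, in which case every element is treated as an individual >= 0 constraint; the starting point 6.0 is an arbitrary feasible guess.)

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 0.64, 0.36, 0.3, 0.2])
y = np.array([-1.0, -0.5, -0.4, -0.1, -0.2])

def loss(w, x, y):
    return ((y - w * x) ** 2).sum()

# Returning an array makes every element its own >= 0 constraint,
# so each prediction w * x_i must be at least 1.
cons = ({'type': 'ineq', 'fun': lambda w: w * x - 1.0},)

res = minimize(loss, np.array([6.0]), args=(x, y),
               method='SLSQP', constraints=cons)
print(res.x)  # close to 5.0, since 1 / min(x) = 5
```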

Upvotes: 1

Views: 3609

Answers (1)

ewcz

Reputation: 13087

If I understand your EDIT2 correctly, you are trying to minimize |y - w*x|^2 as a function of a real parameter w (where x and y are vectors), with the constraint that all components of w*x are larger than 1.

Now, the expression |y - w*x|^2 is quadratic in w, so it has a well-defined global minimum (the coefficient of w^2 is positive). However, the constraint on the components of w*x effectively imposes a minimum admissible value of w (since x is fixed), which in this case is 1/min(x) = 1/0.2 = 5. Since the global minimum of the unconstrained quadratic |y - w*x|^2 lies, for your particular case, around np.dot(y,x)/np.dot(x,x) = -0.919, the function is monotonically increasing for w >= 5, so the value of 5 is the constrained minimum...
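This can be verified quickly with the numbers above (the unconstrained least-squares minimizer comes from setting the derivative of |y - w*x|^2 with respect to w to zero):

```python
import numpy as np

x = np.array([1.0, 0.64, 0.36, 0.3, 0.2])
y = np.array([-1.0, -0.5, -0.4, -0.1, -0.2])

# Unconstrained minimizer of |y - w*x|^2: w = (y . x) / (x . x)
w_free = np.dot(y, x) / np.dot(x, x)
print(w_free)  # about -0.919

# Smallest w for which every component of w*x is >= 1
w_min = 1.0 / x.min()
print(w_min)  # 5.0
```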

To get this answer with your code, one has to fix the constraint. In your version, you sum all components of w*x shifted by 1. It can then happen that one particular component is much larger than 1, so its contribution to the sum masks other components which are only slightly smaller than 1 (for example, if x=[2, 0.25] and w=2, then w*x-1=[3, -0.5] and the sum is positive even though the constraint is violated). To rectify this, one can sum only those components of w*x - 1 which are negative, i.e., those which violate the constraint:

def con(w, x, y, alpha):
    # non-positive by construction; equals 0 exactly when every
    # component of w*x is at least 1
    return np.minimum(w * x - 1, 0).sum()
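A quick check of this corrected constraint on your values: w = 2 now correctly registers as infeasible (negative constraint value), while w = 5 satisfies the 'ineq' condition:

```python
import numpy as np

x = np.array([1.0, 0.64, 0.36, 0.3, 0.2])

def con(w, x):
    # sum only the components of w*x - 1 that violate the constraint
    return np.minimum(w * x - 1, 0).sum()

print(con(2.0, x))  # -1.28: negative, so the constraint is violated
print(con(5.0, x))  # 0.0: every component of w*x is at least 1
```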

Upvotes: 3
