Reputation: 13
I'm currently trying to solve a matrix equation of the form Ax = b, where A is an NxN square matrix and x, b are N-dimensional vectors. However, I require that all the elements x[i] be non-negative.
Imposing this constraint means an exact solution may well not exist (solving the system directly gives a single unique solution, and if that solution has negative entries I'm out of luck), but there must be a way to find an x that gets as close as possible. I'm unsure what this is formally called, so I've had little luck searching.
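Writing it out, I think what I'm after is something like (my own attempt at formalising it):

\min_{x} \; \lVert A x - b \rVert_2 \quad \text{subject to} \quad x_i \ge 0 \;\; \text{for all } i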
What I have so far:
import numpy as np

N = 10
A = np.random.rand(N, N)
B = np.random.rand(N)

# solve the unconstrained system by explicit inversion
A_inv = np.linalg.inv(A)
x = np.dot(A_inv, B)
x=
array([ 0.42216451, 1.70270083, -1.54040488, 2.18724233, 2.04278932, -1.76074253])
Any help would be massively appreciated. Thanks!
Upvotes: 1
Views: 1068
Reputation: 33532
Your problem is called non-negative least squares (NNLS), and scipy supports it via scipy.optimize.nnls.
Without testing, usage would look like:
import numpy as np
from scipy.optimize import nnls

N = 10
A = np.random.rand(N, N)
B = np.random.rand(N)

# nnls solves: minimize ||Ax - B||_2 subject to x >= 0,
# returning the solution and the residual norm
x, rnorm = nnls(A, B)
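As a quick, untested sanity check you could compare it against the unconstrained solution (x_unconstrained is just an illustrative name):

# x is non-negative by construction; rnorm is the residual 2-norm ||Ax - B||
assert (x >= 0).all()

# the unconstrained solution may contain negative entries
x_unconstrained = np.linalg.solve(A, B)
print("nnls residual:", rnorm)
print("unconstrained solution:", x_unconstrained)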
The algorithm is quite old and highly trusted, but it only works for small-to-medium-scale problems (it forms A.T * A internally). If you have a huge problem (e.g. a sparse system with millions of variables), you should try a custom formulation based on L-BFGS-B or, better, a custom (iterative) algorithm.
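An untested sketch of that L-BFGS-B idea (minimizing 0.5 * ||Ax - b||^2 with simple bounds; all names below are just illustrative):

import numpy as np
from scipy.optimize import minimize

def objective(x, A, b):
    # 0.5 * ||Ax - b||^2
    r = A.dot(x) - b
    return 0.5 * r.dot(r)

def gradient(x, A, b):
    # gradient of the objective: A.T (Ax - b)
    return A.T.dot(A.dot(x) - b)

N = 10
A = np.random.rand(N, N)
B = np.random.rand(N)

x0 = np.zeros(N)  # feasible starting point
res = minimize(objective, x0, args=(A, B), jac=gradient,
               method='L-BFGS-B', bounds=[(0, None)] * N)
x_bounded = res.x

For truly huge sparse problems you would build A as a sparse matrix and only ever evaluate matrix-vector products, but the structure stays the same.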
Upvotes: 3