Reputation: 365
I am trying to solve an equation similar to this simplified example:
x / x.sum() - b = 0
where x is an n-dimensional vector. Since x can be multiplied by any constant without changing the equation, the solution is only determined up to normalization. Because of this, I try to add an (n+1)-th equation:
x.sum() - 1 = 0
My attempts to put this in code have all produced errors. This is the most recent minimal example:
import numpy as np
from scipy.optimize import fsolve
n = 1000
a = np.ones(n)
b = np.random.rand(n)
def f(x):
    return (x / (x.sum() + 1) - b, x.sum() - 1)
x = fsolve(f, a)
print(x)
Is this possible with fsolve? What is the correct code?
Further context: the function above is a simplified example. The actual function that I am trying to solve is non-linear and complicated. I can prove that a solution for x exists and that it is unique up to scaling.
Upvotes: 0
Views: 355
Reputation: 8277
I suggest you rephrase your problem as an optimization problem, in which constraints are naturally accounted for. Here using https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html:
import numpy as np
from scipy.optimize import minimize
np.random.seed(10)
n = 1000
a = np.ones(n)
b = np.random.rand(n)
a /= a.sum()  # added to speed up the optimization
b /= b.sum()  # added to sanity-check the solution - not needed

def opt(x):
    return ((x / x.sum() - b)**2).sum()  # replaced mean by sum for better scaling with n

def norm_constraint(x):
    return x.sum() - 1

x = minimize(opt, x0=a, constraints={'type': 'eq', 'fun': norm_constraint}, tol=1e-10).x  # pass tol so the optimization does not terminate too early
print(np.sum(x), np.max(np.abs(x - b)))
# 1.0 1.5460126668448426e-12
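Side note on the original fsolve attempt: fsolve only handles square systems (as many equations as unknowns), which is why appending an (n+1)-th equation fails. If you want to keep the root-finding formulation, scipy.optimize.least_squares accepts overdetermined residual vectors, so the normalization equation can simply be appended. A minimal sketch of that idea, using the simplified example from the question:
import numpy as np
from scipy.optimize import least_squares

n = 1000
a = np.ones(n)
b = np.random.rand(n)
b /= b.sum()  # the system is only consistent if b sums to 1

def residuals(x):
    # n + 1 residuals for n unknowns: fine for least_squares, not for fsolve
    return np.concatenate((x / x.sum() - b, [x.sum() - 1]))

x = least_squares(residuals, a).x
print(np.sum(x), np.max(np.abs(x - b)))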
Edit: here is how you can force the entries of x to be positive:
import numpy as np
from scipy.optimize import minimize
from functools import partial
np.random.seed(10)
n = 1000
a = np.ones(n)
b = np.random.rand(n)
a /= a.sum()  # added to speed up the optimization

def opt(x):
    return ((x / x.sum() - b)**2).sum()  # replaced mean by sum for better scaling with n

def norm_constraint(x):
    return x.sum() - 1

constraints = [{'type': 'eq', 'fun': norm_constraint}]

def pos_constraint_maker(x, idx):
    return x[idx]

for idx in range(n):
    constraints.append({'type': 'ineq', 'fun': partial(pos_constraint_maker, idx=idx)})

x = minimize(opt, x0=a, constraints=constraints, tol=1e-10).x  # pass tol so the optimization does not terminate too early
print(np.sum(x), np.max(np.abs(x - b)))
# 1.0 0.9536785254041191
print(np.max(np.abs(x[x < 0])), np.sum(np.round(x, 10) < 0))
# 8.525672691589617e-14 0
# x does have some negative entries, but their magnitude is very small (~1e-13), and rounding can alleviate that
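A leaner variant of the same idea: minimize also accepts a bounds argument, which avoids building n separate inequality constraints and scales better for large n. A minimal sketch with the same objective and equality constraint:
import numpy as np
from scipy.optimize import minimize

np.random.seed(10)
n = 1000
a = np.ones(n)
b = np.random.rand(n)
a /= a.sum()

def opt(x):
    return ((x / x.sum() - b)**2).sum()

def norm_constraint(x):
    return x.sum() - 1

# one (0, None) bound per entry instead of n inequality-constraint dicts
x = minimize(opt, x0=a, constraints={'type': 'eq', 'fun': norm_constraint},
             bounds=[(0, None)] * n, tol=1e-10).x
print(np.sum(x), np.min(x))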
Upvotes: 1
Reputation: 15233
The first method (and probably the one most likely to perform well) is to reduce the number of degrees of freedom of your decision variables by one and calculate the last entry so that the sum is guaranteed to be 1. Note that this will not converge for your bogus b, because you haven't also normalized it to sum to 1.
import numpy as np
from numpy.random import default_rng
from scipy.optimize import minimize

n = 10  # 1000
rand = default_rng(seed=0)
b = rand.random(n)
b /= b.sum()

def f(xpartial: np.ndarray) -> float:
    # reconstruct the full vector so that it sums to exactly 1
    x = np.concatenate((xpartial, (1 - xpartial.sum(),)))
    error = x - b
    return error.dot(error)

result = minimize(
    fun=f,
    x0=np.full(n - 1, fill_value=1/n),
)
assert result.success, result.message
print(result)
print('Compare:')
print(np.concatenate((result.x, (1 - result.x.sum(),))))
print(b)
The second method is to set a linear constraint (do not use a generic nonlinear constraint when your constraint function is linear). This is a modification of @learning-is-a-mess's solution that, in theory, should give a marginal improvement in numerical performance.
import numpy as np
from numpy.random import default_rng
from scipy.optimize import minimize, LinearConstraint

n = 10  # 1000
rand = default_rng(seed=0)
b = rand.random(n)
b /= b.sum()

def f(x: np.ndarray) -> float:
    error = x - b
    return error.dot(error)

result = minimize(
    fun=f,
    x0=np.full(n, fill_value=1/n),
    # a row of ones with lb == ub == 1 encodes sum(x) == 1
    constraints=LinearConstraint(A=np.ones_like(b), lb=1, ub=1),
)
assert result.success, result.message
print(result)
print('Compare:')
print(result.x)
print(b)
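Finally, to answer the literal fsolve question: the same elimination trick makes the system square (n - 1 equations in n - 1 unknowns), which is exactly what fsolve requires. A minimal sketch using the simplified example from the question; the last residual is dropped because it is determined by the others once both x and b sum to 1:
import numpy as np
from scipy.optimize import fsolve

n = 10
rand = np.random.default_rng(seed=0)
b = rand.random(n)
b /= b.sum()

def f(xpartial):
    # the last entry is computed from the rest, so x always sums to 1
    x = np.concatenate((xpartial, (1 - xpartial.sum(),)))
    return (x / x.sum() - b)[:-1]  # n - 1 residuals for n - 1 unknowns

xpartial = fsolve(f, np.full(n - 1, 1 / n))
x = np.concatenate((xpartial, (1 - xpartial.sum(),)))
print(np.allclose(x, b))  # expect True up to solver tolerance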
Upvotes: 1