Reputation: 134
I usually rely on Wolfram Mathematica for this kind of thing, but I've been delving into Python recently and the results are so much better. Basically, I'm looking for numerical solutions for systems like the following:

r0*r2 + r1*r3 = 0
r0*r1 + r1*r2 + r2*r3 + r0*r3 < 0
r0^2 + r1^2 + r2^2 + r3^2 - 4*(r0 + r1 + r2 + r3)^2 < 0

I know that there are solutions, because Wolfram Mathematica found one: (0.0858875, 0.0116077, -0.156661, 1.15917). What I tried to do in Python is this brute-force code:
import numpy as np

START = -3
END = 3
STEP = 0.1

# brute-force grid search over [-3, 3)^4 with step 0.1
for r0 in np.arange(START, END, STEP):
    for r1 in np.arange(START, END, STEP):
        for r2 in np.arange(START, END, STEP):
            for r3 in np.arange(START, END, STEP):
                eq0 = r0*r2 + r1*r3
                eq1 = r0*r1 + r1*r2 + r2*r3 + r0*r3
                eq2 = r0**2 + r1**2 + r2**2 + r3**2 - 4*(r0 + r1 + r2 + r3)**2
                if eq0 == 0 and eq1 < 0 and eq2 < 0:
                    print(r0, r1, r2, r3)
Edit: I'm okay with things like -0.00001 < eq0 < 0.00001 instead of eq0 == 0.
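In the loop above that would mean replacing the exact equality test with a tolerance check, something like this (the tolerance value is an arbitrary choice):

TOL = 1e-5  # arbitrary tolerance for treating eq0 as zero
if abs(eq0) < TOL and eq1 < 0 and eq2 < 0:
    print(r0, r1, r2, r3)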
Well, although it didn't find solutions in this case, the brute-force method worked well for other systems I'm dealing with, particularly when there are fewer equations and variables. From four variables on, it becomes really difficult.
I'm sorry if I'm asking too much. I'm completely new to Python, so I also don't know whether this is actually trivial. Maybe fsolve would be useful? I'm not sure if it works with inequalities. Also, even when the systems I encounter have only equalities, they always have more variables than equations, like this one:
system2:

r0*r1 + r1*r2 + r2*r3 + r3*r4 + r4*r5 + r0*r5 = 0
r0*r3 + r1*r4 + r2*r5 = 0
r0^2 + r1^2 + r2^2 + r3^2 + r4^2 + r5^2 - 3*(r0 + r1 + r2 + r3 + r4 + r5)^2 = 0

Hence 'fsolve' is not appropriate, right?
Upvotes: 1
Views: 713
Reputation: 7157
As soon as your system contains inequalities, you need to formulate it as an optimization problem and solve it with scipy.optimize.minimize. Otherwise, you can use scipy.optimize.root or scipy.optimize.fsolve to solve the equation system. Note that formulating it as an optimization problem is essentially what happens behind the scenes in root and fsolve anyway, i.e. both solve a least-squares optimization problem.
In general, the problem

g_1(x) = 0, ..., g_m(x) = 0
h_1(x) < 0, ..., h_p(x) < 0

can be formulated as

min  g_1(x)**2 + ... + g_m(x)**2
s.t. -1.0*(h_1(x) + eps) >= 0
     ...
     -1.0*(h_p(x) + eps) >= 0

where eps is a small tolerance used to model the strict inequalities.
Hence, you can solve your first problem as follows:
import numpy as np
from scipy.optimize import minimize

def obj(r):
    # squared residual of the equality r0*r2 + r1*r3 = 0
    return (r[0]*r[2] + r[1]*r[3])**2

eps = 1.0e-6

constrs = [
    # r0*r1 + r1*r2 + r2*r3 + r0*r3 < 0
    {'type': 'ineq', 'fun': lambda r: -1.0*(r[0]*r[1] + r[1]*r[2] + r[2]*r[3] + r[0]*r[3] + eps)},
    # r0^2 + r1^2 + r2^2 + r3^2 - 4*(r0 + r1 + r2 + r3)^2 < 0
    {'type': 'ineq', 'fun': lambda r: -1.0*(np.sum(r**2) - 4*(np.sum(r))**2 + eps)}
]

# res.x contains the solution
res = minimize(obj, x0=np.ones(4), constraints=constrs)
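Keep in mind that minimize only returns a local solution and depends on the starting point x0, so if it fails or converges to a point with a large residual, a simple remedy is to retry from a few random starts. A minimal sketch reusing obj and constrs from above (the number of restarts, the sampling box and the residual threshold are arbitrary choices):

rng = np.random.default_rng(0)
for _ in range(20):
    # random starting point in [-3, 3]^4, matching the search box from the question
    x0 = rng.uniform(-3, 3, size=4)
    res = minimize(obj, x0=x0, constraints=constrs)
    # accept only if the solver converged and the equality residual is tiny
    if res.success and obj(res.x) < 1e-12:
        print(res.x)
        break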
Your second problem can be solved similarly; here, you only need to remove the constraints. Alternatively, you can use root. It's worth mentioning that root solves F(x) = 0 for a function F: R^N -> R^N, i.e. a function of N variables that returns an N-dimensional vector. In case your function has fewer equations than variables, you can simply fill up the vector with zeros:
import numpy as np
from scipy.optimize import root

def F(r):
    # pad with zeros so that F maps R^6 -> R^6
    vals = np.zeros(r.size)
    vals[0] = np.dot(r[:5], r[1:]) + r[0]*r[5]
    vals[1] = r[0]*r[3] + r[1]*r[4] + r[2]*r[5]
    vals[2] = np.sum(r**2) - 3*np.sum(r)**2
    return vals

# res.x contains your solution
res = root(F, x0=np.ones(6))
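As with minimize, it's worth verifying the result; res.success tells you whether the solver converged, and since there are fewer equations than variables, different starting points x0 will generally lead to different solutions. A quick sanity check (purely illustrative):

# verify that the residuals are numerically zero
print(res.success)
print(np.allclose(F(res.x), 0.0, atol=1.0e-8))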
Upvotes: 3
Reputation: 276
Not sure if your problem is a root-finding problem or a minimization-with-constraints problem, but take a look at scipy.optimize and scipy.optimize.linprog; maybe one of those methods can be bent to your application.
Upvotes: 1
Reputation: 23584
Not really an answer, but you can simplify this a lot using product
from itertools import product
import numpy as np
START = -3
END = 3
STEP = 0.1
for r0, r1, r2, r3 in product(np.arange(START, END, STEP), repeat=4):
    print(r0, r1, r2, r3)
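For completeness, here's how the original test would look inside that loop, using the tolerance mentioned in the question's edit (the tolerance value itself is arbitrary):

from itertools import product
import numpy as np

START = -3
END = 3
STEP = 0.1
TOL = 1e-5  # arbitrary tolerance for treating eq0 as zero

for r0, r1, r2, r3 in product(np.arange(START, END, STEP), repeat=4):
    eq0 = r0*r2 + r1*r3
    eq1 = r0*r1 + r1*r2 + r2*r3 + r0*r3
    eq2 = r0**2 + r1**2 + r2**2 + r3**2 - 4*(r0 + r1 + r2 + r3)**2
    if abs(eq0) < TOL and eq1 < 0 and eq2 < 0:
        print(r0, r1, r2, r3)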
Upvotes: 1