Reputation: 1
I want to numerically solve the equation shown in the "code" picture. The results shown in the "result" picture are not correct, as you can see in the plot attached to this question.
What is wrong with my use of fsolve?
Upvotes: 0
Views: 66
Reputation: 15221
It's important that each root search either be given a bracket, or at least a reliable initial estimate and gradient; fsolve doesn't support brackets or bounds. There are many ways to do this. I demonstrate root_scalar
coupled with an analytic gradient and reasonable initial estimates:
import numpy as np
from scipy.optimize import root_scalar, check_grad

def fun(x: float) -> float:
    return np.tan(L*x) + x/h

def grad(x: float) -> float:
    return L/np.cos(L*x)**2 + 1/h

L = 3
h = 0.5
x0 = 0

# Sanity-check the analytic gradient against a finite-difference estimate
error = check_grad(func=fun, grad=grad, x0=1)
assert np.isclose(error, 0, rtol=0, atol=1e-6)

for n in range(10):
    res = root_scalar(f=fun, fprime=grad, x0=x0)
    assert res.converged
    x = res.root
    y = -x/h
    assert np.isclose(np.tan(L*x), y, rtol=0, atol=1e-12)
    print(
        f'n={n} '
        f'x0={x0:.3f} '
        f'x={x:.3f} '
        f'y={y:7.3f} '
        f'niter={res.iterations}'
    )
    # Successive roots are spaced roughly one period of tan(L*x) apart
    x0 = x + np.pi/L
n=0 x0=0.000 x=0.000 y= -0.000 niter=0
n=1 x0=1.047 x=0.725 y= -1.450 niter=7
n=2 x0=1.772 x=1.668 y= -3.336 niter=9
n=3 x0=2.715 x=2.679 y= -5.359 niter=6
n=4 x0=3.727 x=3.710 y= -7.420 niter=5
n=5 x0=4.757 x=4.747 y= -9.495 niter=5
n=6 x0=5.795 x=5.788 y=-11.577 niter=5
n=7 x0=6.836 x=6.831 y=-13.662 niter=5
n=8 x0=7.878 x=7.875 y=-15.750 niter=4
n=9 x0=8.922 x=8.920 y=-17.840 niter=4
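To see why the original fsolve call can mislead: fsolve accepts only a starting point, never a bracket, so nothing confines the search to one branch of tan, and it may report a point far from the intended branch or fail without raising. The original call isn't shown, so this is just a sketch (reusing L=3, h=0.5 from above, with arbitrary starting points) of how to validate each fsolve result via its status flag and residual:

```python
import numpy as np
from scipy.optimize import fsolve

L = 3
h = 0.5

def fun(x):
    return np.tan(L*x) + x/h

flags = []
# Each result must be validated: check the status flag AND the residual,
# since a point near a tan asymptote can otherwise masquerade as a root.
for start in (0.1, 0.9, 2.0):
    x, info, ier, msg = fsolve(fun, x0=start, full_output=True)
    ok = (ier == 1) and abs(fun(x[0])) < 1e-6
    flags.append(ok)
    print(f'start={start} x={x[0]:.3f} ok={ok}')
```

Without this check, an unconverged or off-branch result looks identical to a genuine root.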
Providing a second derivative as well switches to Halley's method, which converges more quickly:
import numpy as np
from scipy.optimize import check_grad, root_scalar

def fun(x: float) -> float:
    return np.tan(L*x) + x/h

def grad(x: float) -> float:
    return L/np.cos(L*x)**2 + 1/h

def grad2(x: float) -> float:
    # d/dx [L/cos(L*x)**2] = 2*tan(L*x)*(L/cos(L*x))**2
    return 2 * np.tan(L*x) * (
        L/np.cos(L*x)
    )**2

L = 3
h = 0.5
x0 = 0

# Verify both analytic derivatives against finite differences
error = check_grad(func=fun, grad=grad, x0=1)
assert np.isclose(error, 0, rtol=0, atol=1e-6)
error = check_grad(func=grad, grad=grad2, x0=1)
assert np.isclose(error, 0, rtol=0, atol=1e-6)

for n in range(10):
    res = root_scalar(f=fun, fprime=grad, fprime2=grad2, x0=x0)
    assert res.converged
    x = res.root
    y = -x/h
    assert np.isclose(np.tan(L*x), y, rtol=0, atol=1e-12)
    print(
        f'n={n} '
        f'x0={x0:.3f} '
        f'x={x:.3f} '
        f'y={y:7.3f} '
        f'niter={res.iterations}'
    )
    x0 = x + np.pi/L
n=0 x0=0.000 x=0.000 y= -0.000 niter=0
n=1 x0=1.047 x=0.725 y= -1.450 niter=5
n=2 x0=1.772 x=1.668 y= -3.336 niter=4
n=3 x0=2.715 x=2.679 y= -5.359 niter=3
n=4 x0=3.727 x=3.710 y= -7.420 niter=3
n=5 x0=4.757 x=4.747 y= -9.495 niter=3
n=6 x0=5.795 x=5.788 y=-11.577 niter=3
n=7 x0=6.836 x=6.831 y=-13.662 niter=3
n=8 x0=7.878 x=7.875 y=-15.750 niter=3
n=9 x0=8.922 x=8.920 y=-17.840 niter=3
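If no derivative is available but two good starting points on the same branch are, root_scalar can also run the secant method, which estimates the slope from successive iterates. This is a sketch of my own (not part of the original answer), using the same L=3, h=0.5 and two hypothetical starting points near the n=1 root:

```python
import numpy as np
from scipy.optimize import root_scalar

L = 3
h = 0.5

def fun(x: float) -> float:
    return np.tan(L*x) + x/h

# Both starting points lie on the same branch of tan(L*x), between the
# asymptotes at pi/6 and pi/2; the secant method needs no derivative.
res = root_scalar(f=fun, x0=0.7, x1=0.8, method='secant')
assert res.converged
print(f'x={res.root:.3f} niter={res.iterations}')
```

This is less robust than Newton or Halley here: starting points that straddle an asymptote can send the iterates onto the wrong branch.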
For cases where the gradients are ambiguous, use brackets instead:
import numpy as np
from scipy.optimize import root_scalar

def fun(x: float) -> float:
    return np.tan(L*x) + k*x/h

L = 0.4
k = 6.8
h = 10
x0 = 0
epsilon = 1e-6

for n in range(10):
    res = root_scalar(
        f=fun,
        x0=x0,
        bracket=(
            # Just right of the tan asymptote at (n - 0.5)*pi/L,
            # where fun diverges to -inf
            (n - 0.5)*np.pi/L + epsilon,
            x0,
        ),
    )
    assert res.converged
    x = res.root
    y = -k*x/h
    err = np.tan(L*x) - y
    assert np.isclose(err, 0, rtol=0, atol=1e-9)
    print(
        f'n={n} '
        f'x0={x0:6.3f} '
        f'x={x:6.3f} '
        f'y={y:7.3f} '
        f'err={err:8.1e} '
        f'niter={res.iterations}'
    )
    x0 = x + np.pi/L
n=0 x0= 0.000 x= 0.000 y= -0.000 err= 0.0e+00 niter=0
n=1 x0= 7.854 x= 4.687 y= -3.187 err=-7.1e-13 niter=11
n=2 x0=12.541 x=12.084 y= -8.217 err= 4.6e-14 niter=11
n=3 x0=19.938 x=19.820 y=-13.478 err=-6.8e-14 niter=10
n=4 x0=27.674 x=27.622 y=-18.783 err=-2.7e-13 niter=10
n=5 x0=35.476 x=35.447 y=-24.104 err= 1.2e-12 niter=9
n=6 x0=43.301 x=43.282 y=-29.432 err= 5.4e-13 niter=9
n=7 x0=51.136 x=51.123 y=-34.763 err= 2.5e-10 niter=8
n=8 x0=58.977 x=58.967 y=-40.098 err= 2.2e-11 niter=8
n=9 x0=66.821 x=66.814 y=-45.433 err= 8.3e-12 niter=8
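As an aside, when root_scalar is given a bracket and no derivatives it defaults (to my knowledge) to Brent's method, so the same bracketed search can be written with brentq directly. A sketch reproducing the n=1 root of the first example (L=3, h=0.5), with the bracket placed just right of the asymptote at pi/(2L):

```python
import numpy as np
from scipy.optimize import brentq

L = 3
h = 0.5

def fun(x: float) -> float:
    return np.tan(L*x) + x/h

# fun -> -inf just right of the asymptote at pi/(2L) and is positive at
# pi/L, so the bracket is guaranteed to contain a sign change.
epsilon = 1e-6
root = brentq(fun, np.pi/(2*L) + epsilon, np.pi/L)
print(f'x={root:.3f} y={-root/h:.3f}')
```

Either spelling works; root_scalar is just the newer, unified interface.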
Upvotes: 0