Reputation: 165
I recently stumbled upon a curious issue concerning scipy.special.legendre()
(scipy documentation). The Legendre polynomials should be pairwise orthogonal. However, when I evaluate them over the range x = [-1, 1]
and build the scalar product of two polynomials of different degree, I don't always get zero or values close to zero. Do I misinterpret the function's behaviour?
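For reference, the orthogonality relation I am relying on is

∫_{-1}^{1} P_m(x) P_n(x) dx = 2/(2n+1) · δ_{mn},

so with my normalization by the number of sample points (which roughly amounts to dividing the integral by the interval length 2) I expect the product of a polynomial with itself to come out as 1/(2n+1), i.e. 1 for P_0, and every mixed product to vanish.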
Below is a short example which produces the scalar products of certain pairs of Legendre polynomials:
from __future__ import print_function, division
import numpy as np
from scipy import special
import matplotlib.pyplot as plt
# create range for evaluation
x = np.linspace(-1,1, 500)
degrees = 6
lp_array = np.empty((degrees, len(x)))
for n in np.arange(degrees):
    LP = special.legendre(n)(x)
    # alternatively:
    # LP = special.eval_legendre(n, x)
    lp_array[n, :] = LP
    plt.plot(x, LP, label=r"$P_{}(x)$".format(n))
plt.grid()
plt.gca().set_ylim([-1.1, 1.1])
plt.legend(fontsize=9, loc="lower right")
plt.show()
The plot of the individual polynomials actually looks fine:
But if I calculate the scalar products manually, i.e. multiply two Legendre polynomials of different degree element-wise and sum them up (the division by 500 is for normalization)...
for i in range(degrees):
print("0vs{}: {:+.6e}".format(i, sum(lp_array[0]*lp_array[i])/500))
... I get the following values as output:
0vs0: +1.000000e+00
0vs1: -5.906386e-17
0vs2: +2.004008e-03
0vs3: -9.903189e-17
0vs4: +2.013360e-03
0vs5: -1.367795e-16
The scalar product of the first polynomial with itself is (as expected) equal to one, and half of the other results are almost zero, but there are some values on the order of 10^-3
and I have no idea why. I also tried the scipy.special.eval_legendre(n, x)
function, with the same result :-\
Is this a bug in the scipy.special.legendre()
function? Or am I doing something wrong? I am looking for constructive responses :-)
cheers, Markus
Upvotes: 3
Views: 1613
Reputation: 7222
As others have commented, you're going to get some error, since you're performing an inexact numerical integral.
But you can reduce the error by doing the integral as accurately as you can. In your case, you can still improve your sampling points to make the integral more exact: when sampling, use the "midpoints" of the intervals instead of the edges:
x = np.linspace(-1, 1, nx, endpoint=False)
x += 1 / nx # I'm adding half a sampling interval
# Equivalent to x += (x[1] - x[0]) / 2
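To see why this helps (a quick sanity check of my own, on a toy integrand rather than the Legendre products): with edge sampling the plain sum mis-weights the two endpoints, so its error only falls off like 1/nx, while midpoint sampling turns the sum into the second-order midpoint rule, whose error falls off like 1/nx^2:

import numpy as np

nx = 500
f = lambda x: x**2  # toy integrand, its integral over [-1, 1] is 2/3

x_edge = np.linspace(-1, 1, nx)                          # edge sampling
x_mid = np.linspace(-1, 1, nx, endpoint=False) + 1 / nx  # midpoint sampling

print(abs(np.sum(f(x_edge)) * 2 / nx - 2 / 3))  # error ~ 3e-3
print(abs(np.sum(f(x_mid)) * 2 / nx - 2 / 3))   # error ~ 3e-6

For the Legendre products the endpoint contribution works out to roughly 1/nx = 2e-3, which is just the size of the nonzero off-diagonal values in the question.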
This gives quite a lot of improvement! If I use the old sampling method:
nx = 500
x = np.linspace(-1, 1, nx)
degrees = 7
lp_array = np.empty((degrees, len(x)))
for n in np.arange(degrees):
    LP = special.eval_legendre(n, x)
    lp_array[n, :] = LP
np.set_printoptions(linewidth=120, precision=1)
prod = np.dot(lp_array, lp_array.T) / x.size
print(prod)
This gives:
[[ 1.0e+00 -5.7e-17 2.0e-03 -8.5e-17 2.0e-03 -1.5e-16 2.0e-03]
[ -5.7e-17 3.3e-01 -4.3e-17 2.0e-03 -1.0e-16 2.0e-03 -1.1e-16]
[ 2.0e-03 -4.3e-17 2.0e-01 -1.3e-16 2.0e-03 -1.0e-16 2.0e-03]
[ -8.5e-17 2.0e-03 -1.3e-16 1.4e-01 -1.2e-16 2.0e-03 -1.0e-16]
[ 2.0e-03 -1.0e-16 2.0e-03 -1.2e-16 1.1e-01 -9.6e-17 2.0e-03]
[ -1.5e-16 2.0e-03 -1.0e-16 2.0e-03 -9.6e-17 9.3e-02 -1.1e-16]
[ 2.0e-03 -1.1e-16 2.0e-03 -1.0e-16 2.0e-03 -1.1e-16 7.9e-02]]
Error terms are ~10^-3.
But using the "midpoint sampling scheme", I get:
[[ 1.0e+00 -2.8e-17 -2.0e-06 -3.6e-18 -6.7e-06 -8.2e-17 -1.4e-05]
[ -2.8e-17 3.3e-01 -2.8e-17 -4.7e-06 -2.7e-17 -1.1e-05 -4.1e-17]
[ -2.0e-06 -2.8e-17 2.0e-01 -5.7e-17 -8.7e-06 -2.3e-17 -1.6e-05]
[ -3.6e-18 -4.7e-06 -5.7e-17 1.4e-01 -2.1e-17 -1.4e-05 -5.3e-18]
[ -6.7e-06 -2.7e-17 -8.7e-06 -2.1e-17 1.1e-01 1.1e-17 -2.1e-05]
[ -8.2e-17 -1.1e-05 -2.3e-17 -1.4e-05 1.1e-17 9.1e-02 7.1e-18]
[ -1.4e-05 -4.1e-17 -1.6e-05 -5.3e-18 -2.1e-05 7.1e-18 7.7e-02]]
Errors are now ~10^-5 or even 10^-6, which is much better!
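If you need the orthogonality to hold essentially to machine precision, you can go one step further and use Gauss-Legendre quadrature (a sketch of my own on top of numpy.polynomial.legendre.leggauss, not something the question used): an n-point rule integrates polynomials of degree up to 2n - 1 exactly, so n = degrees nodes already cover every product P_i * P_j here.

import numpy as np
from scipy import special

degrees = 7
# an n-point Gauss-Legendre rule is exact for polynomials of degree <= 2n - 1,
# so n = degrees nodes suffice for all products P_i * P_j with i, j < degrees
nodes, weights = np.polynomial.legendre.leggauss(degrees)

lp = np.array([special.eval_legendre(n, nodes) for n in range(degrees)])

# weighted inner products, normalized by the interval length 2
prod = np.dot(lp * weights, lp.T) / 2
np.set_printoptions(linewidth=120, precision=1)
print(prod)  # off-diagonal entries ~1e-16, diagonal equals 1/(2n+1)

The diagonal then comes out as 1/(2n+1) (1, 0.33, 0.2, ...), the same values as in the matrices above, and the off-diagonal terms are down at rounding level.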
Upvotes: 1