pretzlstyle

Reputation: 2962

Optimizing NumPy finite differences via chain rule

Consider the following code:

import numpy as np

x = np.array([1, 5, 6, 10])  # an unstructured (non-uniform) coordinate
f = x**2                     # function values on the points x
grad1 = np.gradient(f, x)                # df/dx
grad2 = np.gradient(f) / np.gradient(x)  # df/di * di/dx = df/dx

I would have expected that, by the chain rule, grad1 == grad2. The i in the comment above is simply a uniform "index", i.e. the implicit coordinate 0, 1, 2, ... that np.gradient assumes when no spacing is passed. After testing, this equality holds for simple linear functions, but not for e.g. x**2 as shown above. I'm now wondering if there is a theoretical reason why the chain rule shouldn't hold in general for derivatives estimated by finite differences.
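For reference, a quick check of that claim (reusing the x above, with an arbitrary slope of 3 for the linear case) looks like this:

import numpy as np

x = np.array([1.0, 5.0, 6.0, 10.0])

f_lin = 3 * x  # linear: both routes agree everywhere
print(np.allclose(np.gradient(f_lin, x), np.gradient(f_lin) / np.gradient(x)))  # True

f_sq = x**2    # quadratic: they disagree on the non-uniformly spaced interior points
print(np.allclose(np.gradient(f_sq, x), np.gradient(f_sq) / np.gradient(x)))  # False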

I think the problem lies in the following observation: np.gradient does not, in general, assume the input coordinates x to be uniform. But I think this expression of the chain rule does, which I suspect is implicit in the call np.gradient(x). When we call np.gradient(f, x) with non-uniform x, we are really performing an interpolation at each interior point rather than a true centered difference...

Upvotes: 0

Views: 95

Answers (1)

jared

Reputation: 9096

No, I would not expect grad1 to equal grad2 for non-uniform spacing. The second-order central difference equation for the first derivative with non-uniform spacing is shown in the np.gradient documentation. When you don't specify the spacing, numpy assumes a constant spacing h (as you've said) and makes use of the simplified formula that goes with that assumption, i.e. f' = (f[2:] - f[:-2])/(2h) (ignoring the endpoints). When you attempt to perform the derivative as you have done, you end up with f' = (f[2:] - f[:-2])/(x[2:] - x[:-2]) (again ignoring the endpoints). This is not equivalent to the non-uniform equation: it weights f[i-1] and f[i+1] equally no matter how far each sits from x[i], so the weights are incorrect and your results are wrong.
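To make the difference concrete, here is a small sketch (the names hd and hs for the left and right spacings are mine, not numpy's) that evaluates the non-uniform interior formula from the documentation next to the equal-weight version that the chain-rule approach produces:

import numpy as np

x = np.array([1.0, 5.0, 6.0, 10.0])
f = x**2

hd = x[1:-1] - x[:-2]  # distance to the left neighbor
hs = x[2:] - x[1:-1]   # distance to the right neighbor

# Non-uniform second-order central difference (interior points only)
weighted = (hd**2 * f[2:] + (hs**2 - hd**2) * f[1:-1] - hs**2 * f[:-2]) / (hs * hd * (hd + hs))
print(weighted)  # [10. 12.] -- matches np.gradient(f, x)[1:-1]

# The chain-rule route weights f[i-1] and f[i+1] equally, valid only for uniform spacing
naive = (f[2:] - f[:-2]) / (x[2:] - x[:-2])
print(naive)     # [ 7. 15.]

The weighted formula even lands on the exact derivative here (10 and 12) because the scheme is second order and f is a quadratic.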

Upvotes: 0
