Reputation: 2962
I have a 3D array of data A, with shape (NX, NY, NZ) in the x, y, and z dimensions, respectively.
I want to find the gradient of A in the y dimension. This can be done easily with NumPy:
dAdy = np.gradient(A, Y, axis=1)
where Y is a 1D vector of coordinates in the y dimension.
However, this becomes nontrivial if Y is unstructured. That is, every "column" of data at a fixed position (x, z) = (Xi, Zi) has a unique set of y coordinates. For example:
A = np.random.random((10, 10, 10))
X = np.arange(10)
Y = np.sort(np.random.random((10, 10, 10)), axis=1)
Z = np.arange(10)
The result above is a 3D dataset A, defined on a structured set of X and Z coordinates, while the value of the Y coordinate is unique for every data point (but is of course monotonic in the y dimension). I want to estimate dA/dy via finite differences.
Essentially, I'm trying to take the gradient of many independent columns. Is there a way to vectorize this with NumPy? I tried the following iterative approach, but it's very slow:
# A is the 3D dataset
# Y is the 3D dataset with shape matching that of A; gives the y-position of each datapoint in A
NX, NY, NZ = A.shape
dA_dy = np.zeros((NX, NY, NZ))
for i in range(NX):
    for k in range(NZ):
        dA_dy[i, :, k] = np.gradient(A[i, :, k], Y[i, :, k])
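One way to vectorize this is to apply the same finite-difference formulas that np.gradient uses for unequally spaced points (a second-order three-point formula in the interior, one-sided first differences at the edges, i.e. the default edge_order=1) to every column at once via slicing and broadcasting. A minimal sketch, assuming NY >= 3; the helper name gradient_unstructured_y is mine:

import numpy as np

def gradient_unstructured_y(A, Y):
    # Backward and forward spacings along axis 1.
    hs = Y[:, 1:-1, :] - Y[:, :-2, :]   # h_s = y_i - y_{i-1}
    hd = Y[:, 2:, :] - Y[:, 1:-1, :]    # h_d = y_{i+1} - y_i
    dA_dy = np.empty_like(A, dtype=float)
    # Interior points: second-order formula for unequal spacing,
    # the same weights np.gradient applies when given array coordinates.
    dA_dy[:, 1:-1, :] = (hs**2 * A[:, 2:, :]
                         + (hd**2 - hs**2) * A[:, 1:-1, :]
                         - hd**2 * A[:, :-2, :]) / (hs * hd * (hd + hs))
    # Edges: one-sided first differences (edge_order=1).
    dA_dy[:, 0, :] = (A[:, 1, :] - A[:, 0, :]) / (Y[:, 1, :] - Y[:, 0, :])
    dA_dy[:, -1, :] = (A[:, -1, :] - A[:, -2, :]) / (Y[:, -1, :] - Y[:, -2, :])
    return dA_dy

On the example arrays above, np.allclose(gradient_unstructured_y(A, Y), dA_dy) should hold against the loop version.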
I also thought that I could be clever by applying the chain rule:
dA_dy = np.gradient(A, axis=1) / np.gradient(Y, axis=1)
But for the following simple test, this approach does not work:
g = np.array([1, 5, 6, 10]) # an unstructured coordinate
f = g**2 # function value on the points g
grad1 = np.gradient(f, g) # df/dg
grad2 = np.gradient(f) / np.gradient(g) # df/dg?
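Concretely, the two disagree at the interior points (the edge values happen to match):
>>> grad1
array([ 6., 10., 12., 16.])
>>> grad2
array([ 6.,  7., 15., 16.])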
I only get grad1 = grad2 for a few simple linear functions, but not for the function above. I'm now wondering if there is a theoretical reason why the chain rule shouldn't hold in general for derivatives estimated by finite differences.
Upvotes: 0
Views: 285
Reputation: 120539
(Not an answer that solves the problem, but it addresses the chain-rule observation)
I only get grad1=grad2 for a few simple linear functions
For sure:
# np.gradient(f) is equivalent to:
>>> np.gradient(f, np.arange(f.size))
array([24. , 17.5, 37.5, 64. ])
# np.gradient(g) is equivalent to:
>>> np.gradient(g, np.arange(g.size))
array([4. , 2.5, 2.5, 4. ])
# so np.gradient(f) / np.gradient(g) is equivalent to:
>>> np.gradient(f, np.arange(f.size)) / np.gradient(g, np.arange(g.size))
array([ 6.,  7., 15., 16.])
If x is evenly spaced, grad1 is equal to grad2 even if the function f is not linear:
x = np.array([1, 3, 5, 7])
f = x**2
grad1 = np.gradient(f, x)
grad2 = np.gradient(f) / np.gradient(x)
Output:
>>> grad1 == grad2
array([ True, True, True, True])
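A sketch of why this holds (my reasoning, following NumPy's documented central-difference formulas): for an interior point with even spacing h,

$$
\frac{\mathtt{np.gradient}(f)[i]}{\mathtt{np.gradient}(x)[i]}
= \frac{(f_{i+1} - f_{i-1})/2}{h}
= \frac{f_{i+1} - f_{i-1}}{2h}
= \mathtt{np.gradient}(f, x)[i],
$$

and the one-sided differences at the edges factor the same way. With unequal spacing, np.gradient(f, x) instead uses a second-order three-point formula whose weights depend on both local spacings, so it cannot be written as a ratio of two unit-spacing gradients. That is the theoretical reason the chain-rule trick fails in general.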
Upvotes: 0