Emily

Reputation: 865

Iterating over multidimensional Numpy array

What is the fastest way to iterate over all elements in a 3D NumPy array? If array.shape == (z, r, c) (as in the code below, which indexes arr[d, r, c]), there must be something faster than this:

x = np.asarray(range(12)).reshape((1,4,3))

#function that sums nearest neighbor values

#e is my element location, d is the distance
def nn(arr, e, d=1):
    d, r, c = e  # note: this reuses d for the depth index, shadowing the distance argument
    return (sum(arr[d, r-1, c-1:c+2]) + sum(arr[d, r+1, c-1:c+2])
            + arr[d, r, c-1] + arr[d, r, c+1])  # the last two terms are single elements, so no sum()

Instead of a nested for loop like the one below, which generates each value of e and calls nn for every pixel:

for dim in range(z):
    for row in range(r):
        for col in range(c):
            e = (dim, row, col)  

I'd like to vectorize my nn function in a way that extracts location information for each element (e = (0,1,1) for example) and iterates over ALL elements in my matrix without having to manually input each locational value of e OR creating a messy nested for loop. I'm not sure how to apply np.vectorize to this problem. Thanks!

Upvotes: 3

Views: 1676

Answers (3)

sds

Reputation: 60074

What you are looking for is probably np.nditer:

a = np.arange(6).reshape(2,3)
for x in np.nditer(a):
    print(x, end=' ')

which prints

0 1 2 3 4 5
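For completeness, nditer can also report each element's location (the e tuple from the question) via the multi_index flag, which seems closer to what is being asked. A minimal sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# flags=['multi_index'] makes the iterator track each element's (row, col) location
it = np.nditer(a, flags=['multi_index'])
locations = []
for x in it:
    locations.append((it.multi_index, int(x)))

print(locations[0], locations[-1])  # → ((0, 0), 0) ((1, 2), 5)
```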

Upvotes: 0

Emily

Reputation: 865

Here's what I ended up doing. Since I'm returning the xv vector and slipping it into the larger 3D array lag, this should speed up the process, right? data is my input dataset.

def nn3d(arr, e):
    r,c = e

    n = np.copy(arr[:,r-1:r+2,c-1:c+2])
    n[:,1,1] = 0

    n3d = np.ma.masked_where(n == nodata, n)  # nodata is the dataset's fill value, defined elsewhere

    xv = np.zeros(arr.shape[0])
    for d in range(arr.shape[0]):
        if np.ma.count(n3d[d,:,:]) < 2:
            element = nodata
        else:
            element = np.sum(n3d[d,:,:])/(np.ma.count(n3d[d,:,:])-1)
        xv[d] = element

    return xv

lag = np.zeros(shape = data.shape)        
for r in range(1,data.shape[1]-1): #boundary effects
    for c in range(1,data.shape[2]-1):
        lag[:,r,c] = nn3d(data,(r,c)) 
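The snippet above isn't self-contained (data and nodata come from the surrounding script); a minimal runnable version with made-up data and a hypothetical sentinel value might look like:

```python
import numpy as np

nodata = -9999.0                            # hypothetical fill value; any sentinel works
data = np.arange(24, dtype=float).reshape(2, 3, 4)

def nn3d(arr, e):
    r, c = e
    n = np.copy(arr[:, r-1:r+2, c-1:c+2])   # 3x3 neighbourhood in every layer
    n[:, 1, 1] = 0                          # zero out the centre pixel
    n3d = np.ma.masked_where(n == nodata, n)
    xv = np.zeros(arr.shape[0])
    for d in range(arr.shape[0]):
        if np.ma.count(n3d[d, :, :]) < 2:
            xv[d] = nodata
        else:
            # mean of the valid neighbours (count - 1 excludes the zeroed centre)
            xv[d] = np.sum(n3d[d, :, :]) / (np.ma.count(n3d[d, :, :]) - 1)
    return xv

lag = np.zeros(shape=data.shape)
for r in range(1, data.shape[1] - 1):       # boundary effects
    for c in range(1, data.shape[2] - 1):
        lag[:, r, c] = nn3d(data, (r, c))
```

With an arange input the result at each interior pixel is just the mean of its eight neighbours, e.g. lag[0, 1, 1] comes out as 5.0.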

Upvotes: 0

hpaulj

Reputation: 231738

It is easy to vectorize over the d dimension:

def nn(arr, e):
    r, c = e  # (e[0], e[1])
    return (np.sum(arr[:, r-1, c-1:c+2], axis=1) +
            np.sum(arr[:, r+1, c-1:c+2], axis=1) +
            np.sum(arr[:, r, c-1], axis=?) + np.sum(arr[:, r, c+1], axis=?))

Now just iterate over the row and col dimensions; each call returns a vector that is assigned to the appropriate slot in x.

for row in <correct range>:
    for col in <correct range>:
        x[:,row,col] = nn(data, (row,col))
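Filled in, that sketch might look like this (one assumption resolved here: arr[:, r, c-1] and arr[:, r, c+1] are already vectors over d, so they need no sum at all):

```python
import numpy as np

data = np.arange(24, dtype=float).reshape(2, 3, 4)

def nn(arr, e):
    r, c = e
    # each 3-wide slice has shape (z, 3); axis=1 sums across the three columns
    return (np.sum(arr[:, r-1, c-1:c+2], axis=1) +
            np.sum(arr[:, r+1, c-1:c+2], axis=1) +
            arr[:, r, c-1] + arr[:, r, c+1])

x = np.zeros_like(data)
for row in range(1, data.shape[1] - 1):   # interior points only
    for col in range(1, data.shape[2] - 1):
        x[:, row, col] = nn(data, (row, col))
```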

The next step is to build broadcastable index arrays and index once per neighbour offset:

rows = ...[:, None]
cols = ...
arr[:, rows-1, cols+2] + arr[:, rows, cols+2]  # etc.
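Taken to its conclusion, that broadcasting step removes the loops entirely. A sketch assuming interior points only (boundaries left at zero) and the eight-neighbour sum from the question:

```python
import numpy as np

x = np.arange(24, dtype=float).reshape(2, 3, 4)

# broadcastable interior indices: rows is a column vector, cols a row vector
rows = np.arange(1, x.shape[1] - 1)[:, None]
cols = np.arange(1, x.shape[2] - 1)

# eight shifted copies of the interior, summed in one expression
out = np.zeros_like(x)
out[:, 1:-1, 1:-1] = (x[:, rows-1, cols-1] + x[:, rows-1, cols] + x[:, rows-1, cols+1] +
                      x[:, rows,   cols-1] +                      x[:, rows,   cols+1] +
                      x[:, rows+1, cols-1] + x[:, rows+1, cols] + x[:, rows+1, cols+1])
```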

This kind of problem has come up many times, with various descriptions - convolution, smoothing, filtering etc.

We could do some searches to find the best, or, if you prefer, we could guide you through the steps.

Converting a nested loop calculation to Numpy for speedup

is a similar question. It has only two levels of looping and a different sum expression, but I think it raises the same issues:

for h in xrange(1, height-1):
    for w in xrange(1, width-1):
        new_gr[h][w] = (gr[h][w] + gr[h][w-1] + gr[h-1][w]
                        + t * gr[h+1][w-1] - 2 * (gr[h][w-1] + t * gr[h-1][w]))
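That loop also yields to the shifted-slice rewrite: every gr[h+dh][w+dw] over the interior becomes a slice of gr. A sketch with made-up gr and t (the linked question defines its own):

```python
import numpy as np

t = 0.5  # hypothetical constant
gr = np.arange(25, dtype=float).reshape(5, 5)
height, width = gr.shape

# loop version, as quoted above (range replaces Python 2's xrange)
new_gr = np.zeros_like(gr)
for h in range(1, height - 1):
    for w in range(1, width - 1):
        new_gr[h][w] = (gr[h][w] + gr[h][w-1] + gr[h-1][w]
                        + t * gr[h+1][w-1] - 2 * (gr[h][w-1] + t * gr[h-1][w]))

# shifted-slice version: gr[h][w] -> gr[1:-1, 1:-1], gr[h][w-1] -> gr[1:-1, :-2], etc.
vec = np.zeros_like(gr)
vec[1:-1, 1:-1] = (gr[1:-1, 1:-1] + gr[1:-1, :-2] + gr[:-2, 1:-1]
                   + t * gr[2:, :-2] - 2 * (gr[1:-1, :-2] + t * gr[:-2, 1:-1]))
```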

Upvotes: 1
