Colin Talbert

Reputation: 494

'Tiling' a 2d array using numpy

I'm trying to reduce the size of a 2D array by taking the majority value of square chunks of the array and writing these to another array. The size of the square chunks is variable, let's say n values on a side. The data type of the array will be an integer. I'm currently using a Python loop to assign each chunk to a temporary array, pulling the unique values from the tmpArray, then looping through those to find the one with the most occurrences. As you can imagine, this process quickly becomes too slow as the input array size increases.

I've seen examples that take the min, max, and mean of square chunks, but I don't know how to convert them to a majority: Grouping 2D numpy array in average and resize with averaging or rebin a numpy 2d array

I'm looking for some means of speeding this up by having numpy perform the operation on the entire array at once. (I can handle switching to tiled sections of the array once the input gets too large to fit in memory.)

Thanks

#snippet of my code
import numpy as np

#pull a tmpArray representing one square chunk of my input array
kernel = sourceDs.GetRasterBand(1).ReadAsArray(int(sourceRow),
                                    int(sourceCol),
                                    int(numSourcePerTarget),
                                    int(numSourcePerTarget))
#get a list of the unique values
uniques = np.unique(kernel)
curMajority = 0
for val in uniques:
    numOccurrences = (kernel == val).sum()
    if numOccurrences > curMajority:
        ans = val
        curMajority = numOccurrences

#write out our answer (the majority value, not its count)
outBand.WriteArray(ans, row, col)

#This is insanity!!!

Following the excellent suggestions of Bago I think I'm well on the way to a solution. Here's what I have so far. One change I made was to reshape the original grid into an (x*y, n*n) array. The problem I'm running into is that I can't seem to figure out how to translate the where, counts, and uniq_a steps from one dimension to two.

#test data
grid = np.array([[37,  1,  4,  4,  6,  6,  7,  7],
                 [ 1, 37,  4,  5,  6,  7,  7,  8],
                 [ 9,  9, 11, 11, 13, 13, 15, 15],
                 [ 9, 10, 11, 12, 13, 14, 15, 16],
                 [17, 17, 19, 19, 21, 11, 23, 23],
                 [17, 18, 19, 20, 11, 22, 23, 24],
                 [25, 25, 27, 27, 29, 29, 31, 32],
                 [25, 26, 27, 28, 29, 30, 31, 32]])
print(grid)

n = 4
X, Y = grid.shape
x = X // n
y = Y // n
grid = grid.reshape( (x, n, y, n) )
grid = grid.transpose( [0, 2, 1, 3] )
grid = grid.reshape( (x*y, n*n) )
grid = np.sort(grid)
diff = np.empty((grid.shape[0], grid.shape[1]+1), bool)
diff[:, 0] = True
diff[:, -1] = True
diff[:, 1:-1] = grid[:, 1:] != grid[:, :-1]
where = np.where(diff)

#This is where it falls apart for me, as 
#where returns two arrays:
# row indices [0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3]
# col indices [ 0  2  5  6  9 10 13 14 16  0  3  7  8 11 12 15 16  0  3  4  7  8 11 12 15
# 16  0  2  3  4  7  8 11 12 14 16]
#I'm not sure how to get per-row counts from these
counts = where[:, 1:] - where[:, -1]
argmax = counts[:].argmax()
uniq_a = grid[diff[1:]]
print uniq_a[argmax]
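For what it's worth, one fully vectorized way to finish this per-row step is a pairwise-comparison count (a sketch, practical only while n*n stays small, since it builds an (x*y, n*n, n*n) comparison cube; the 4-row array below is just a stand-in for the sorted (x*y, n*n) grid):

```python
import numpy as np

# stand-in for the sorted (x*y, n*n) grid built above
grid = np.array([[1, 1, 1, 3],
                 [2, 2, 2, 2],
                 [4, 4, 4, 4],
                 [5, 6, 6, 6]])
# count, for every element, how many elements in its row equal it
counts = (grid[:, :, None] == grid[:, None, :]).sum(axis=2)
# column of the first maximal count in each row; rows are sorted,
# so ties resolve toward the smaller value
majority = grid[np.arange(grid.shape[0]), counts.argmax(axis=1)]
print(majority)  # [1 2 4 6]
```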

Upvotes: 3

Views: 2525

Answers (2)

Colin Talbert

Reputation: 494

It might be a bit of a cop-out, but I ended up resorting to the scipy.stats mode function to find the majority value. I'm not sure how this compares to the other solutions in terms of processing time.

import numpy as np
from scipy import stats
#test data
grid = np.array([[37,  1,  4,  4,  6,  6,  7,  7],
                 [ 1, 37,  4,  5,  6,  7,  7,  8],
                 [ 9,  9, 11, 11, 13, 13, 15, 15],
                 [ 9, 10, 11, 12, 13, 14, 15, 16],
                 [17, 17, 19, 19, 21, 11, 23, 23],
                 [17, 18, 19, 20, 11, 22, 23, 24],
                 [25, 25, 27, 27, 29, 29, 31, 32],
                 [25, 26, 27, 28, 29, 30, 31, 32]])
print(grid)

n = 2
X, Y = grid.shape
x = X // n
y = Y // n
grid = grid.reshape( (x, n, y, n) )
grid = grid.transpose( [0, 2, 1, 3] )
grid = grid.reshape( (x*y, n*n) )
answer = np.array(stats.mode(grid, axis=1)[0]).reshape(x, y)
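As a scipy-free cross-check, the same per-row majority can be sketched with np.bincount, which is valid here only because the values are small non-negative integers (bincount followed by argmax also resolves ties toward the smaller value, matching stats.mode):

```python
import numpy as np

grid = np.array([[37,  1,  4,  4,  6,  6,  7,  7],
                 [ 1, 37,  4,  5,  6,  7,  7,  8],
                 [ 9,  9, 11, 11, 13, 13, 15, 15],
                 [ 9, 10, 11, 12, 13, 14, 15, 16],
                 [17, 17, 19, 19, 21, 11, 23, 23],
                 [17, 18, 19, 20, 11, 22, 23, 24],
                 [25, 25, 27, 27, 29, 29, 31, 32],
                 [25, 26, 27, 28, 29, 30, 31, 32]])
n = 2
X, Y = grid.shape
x, y = X // n, Y // n
# one row per n-by-n block
rows = grid.reshape(x, n, y, n).transpose(0, 2, 1, 3).reshape(x * y, n * n)
# bincount counts occurrences of each value; argmax picks the most
# frequent (smallest value wins ties)
answer = np.array([np.bincount(r).argmax() for r in rows]).reshape(x, y)
```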

Upvotes: 1

Bi Rico

Reputation: 25813

Here is a function that will find the majority much more quickly; it's based on the implementation of numpy.unique.

def get_majority(a):
    a = a.ravel()
    a = np.sort(a)
    diff = np.empty(len(a)+1, 'bool')
    diff[0] = True
    diff[-1] = True
    diff[1:-1] = a[1:] != a[:-1]
    where = np.where(diff)[0]
    counts = where[1:] - where[:-1]
    argmax = counts.argmax()
    uniq_a = a[diff[1:]]
    return uniq_a[argmax]

Let me know if that helps.

Update

You can do the following to get your array to be (n*n, x, y), that should set you up to operate on the first axis and get this done in a vectorized way.

X, Y = a.shape
x = X // n
y = Y // n
a = a.reshape( (x, n, y, n) )
a = a.transpose( [1, 3, 0, 2] )
a = a.reshape( (n*n, x, y) )

Just a few things to keep in mind. Even though reshape and transpose return views whenever possible, I believe reshape-transpose-reshape will be forced to copy. Also generalizing the above method to operate on an axis should be possible but might take a little creativity.
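Putting the two pieces together, a standalone sketch (it repeats get_majority so it runs on its own, and applies it down the first axis of the (n*n, x, y) layout; the 4x4 grid is a small made-up example, not the question's data):

```python
import numpy as np

def get_majority(a):
    a = np.sort(a.ravel())
    diff = np.empty(len(a) + 1, bool)
    diff[0] = diff[-1] = True
    diff[1:-1] = a[1:] != a[:-1]
    where = np.where(diff)[0]
    counts = where[1:] - where[:-1]   # length of each run of equal values
    return a[diff[1:]][counts.argmax()]

n = 2
a = np.array([[1, 1, 2, 2],
              [1, 3, 2, 2],
              [4, 4, 5, 6],
              [4, 4, 6, 6]])
X, Y = a.shape
x, y = X // n, Y // n
# a[:, i, j] now holds the n*n values of block (i, j)
a = a.reshape(x, n, y, n).transpose(1, 3, 0, 2).reshape(n * n, x, y)
# majority of each block; the loop over block positions is cheap
# since get_majority does the heavy lifting in numpy
result = np.array([[get_majority(a[:, i, j]) for j in range(y)]
                   for i in range(x)])
print(result)  # [[1 2] [4 6]]
```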

Upvotes: 3
