Dair

Reputation: 16250

Is there any way to make this numpy array operation faster?

Suppose I have a matrix:

A = [[2, 1]
     [1, 2]]

And a list of matrices:

B = [[1, 0]   C = [[2, 1],  D = [[0, 0],  E = [[1, 0],
     [1, 0]]       [0, 0]]       [0, 0]]       [0, 0]]

I first flatten A, giving A.flatten() = [2 1 1 2], and then sum these elements multiplied with B, C, D and E respectively. So:

A[0] * B + A[1]*C + A[2]*D + A[3]*E
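
In plain NumPy, that small example works out to:

import numpy as np

A = np.array([[2, 1], [1, 2]])
B = np.array([[1, 0], [1, 0]])
C = np.array([[2, 1], [0, 0]])
D = np.array([[0, 0], [0, 0]])
E = np.array([[1, 0], [0, 0]])

a = A.flatten()  # [2 1 1 2]
result = a[0]*B + a[1]*C + a[2]*D + a[3]*E  # [[6, 1], [2, 0]]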

Now consider a more general case:

A[0] * X_1 + A[1] * X_2 + ... + A[n-1] * X_n

where each X_i can have any shape. This is the code I have come up with to do this:

import numpy as np
from functools import reduce
from operator import mul

def product(iterable):
    return reduce(mul, iterable)


def create_table(old_shape, new_shape):
    # Create X_1, X_2, ..., X_n
    lookup = []
    for _ in range(product(old_shape)):
        lookup.append(np.random.rand(*new_shape))
    return lookup


def sum_expansion(arr, lookup, shape):
    # A[0] * X_1 + ... + A[n-1] * X_n
    new_arr = np.zeros(shape)
    for i, a in enumerate(arr.flatten()):
        new_arr += a * lookup[i]

    return new_arr

if __name__ == '__main__':
    lookup = create_table((2, 2), (3, 3, 3))
    # Generate random 2 x 2 matrices.
    randos = (np.random.rand(2, 2) for _ in range(100000))
    results = map(lambda x: sum_expansion(x, lookup, (3, 3, 3)), randos)
    print(list(results))

This code takes about 74 seconds to execute on my machine. Are there any ways to reduce the time it takes?

Upvotes: 4

Views: 329

Answers (3)

Crispin

Reputation: 2120

This is relatively straightforward using an Einstein summation on properly reshaped arrays:

import numpy as np


def do_sum(x, mat_lst):
    a = np.array(x).flatten().reshape(1, -1)
    print('A shape: ', a.shape)
    b = np.stack(mat_lst)
    print('B shape: ', b.shape)
    return np.einsum('ij,jkl->kl', a, b)

A = [[1,2],[3,4]]
B = [[[1,1],[1,1]],[[2,2],[2,2]],[[3,3],[3,3]],[[4,4],[4,4]]]

do_sum(A,B)

Outputs

A shape:  (1, 4)
B shape:  (4, 2, 2)

[[30 30]
 [30 30]]
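
For intuition, this particular contraction can also be written as an ordinary matrix product on a flattened b; a quick sketch using the same example values:

import numpy as np

a = np.array([[1, 2, 3, 4]])                             # shape (1, 4)
b = np.stack([np.full((2, 2), i) for i in range(1, 5)])  # shape (4, 2, 2)

ein = np.einsum('ij,jkl->kl', a, b)

# The same sum over j, written as a matrix product on a flattened b.
mat = (a @ b.reshape(4, -1)).reshape(2, 2)
print(np.allclose(ein, mat))  # True; both give [[30, 30], [30, 30]]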

Edit - For the generalized case

This is generalized for a list of n-d input arrays. The only prerequisite is that the number of elements in x should equal the length of mat_lst.

def do_sum(x, mat_lst):
    a = np.array(x).flatten()
    b = np.stack(mat_lst)
    print("A shape: {}\nB shape: {}".format(a.shape, b.shape))
    return np.einsum('i,i...', a, b)

A = [[1,2],[3,4]]
B = [np.random.rand(2,2,2) for _ in range(4)]
do_sum(A,B)

(Note: there was no reason to reshape the flattened array, as I did previously, except as an aid to understanding how Einstein summations work; in my opinion, it's easier to visualize a (1, 3) matrix than a (3,) array. So I've removed that step here.)

The Einstein convention implies summation over indices that are repeated across operands. In our generalized case of two arrays with shapes a.shape = (n,) and b.shape = (n, ...), we wish to sum over the first dimension of a and b only. We don't care how many other dimensions b has, or how deep they are, so we use ... as a catch-all for the remaining dimensions. The summation dimension(s) disappear from the output array, so the output array contains all other dimensions (i.e. ...).

The subscript string passed to einsum captures all this information. On the input side of the string (everything to the left of ->) we label the indices for each operand (i.e. the input arrays a and b), separated by commas. Indices to sum over are repeated (i.e. i). On the output side of the string (to the right of ->) we indicate output indices. Our function doesn't need an output side at all: when -> is omitted, einsum outputs all the dimensions not included in the summation.
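
To make that concrete, here is a small sketch (with arbitrary example shapes) showing that 'i,i...' matches an explicit loop over the shared first axis:

import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])                         # shape (4,)
b = np.stack([np.random.rand(2, 2, 2) for _ in range(4)])  # shape (4, 2, 2, 2)

ein = np.einsum('i,i...', a, b)

# Explicit loop over the shared first axis i.
loop = np.zeros(b.shape[1:])
for i in range(len(a)):
    loop += a[i] * b[i]

print(np.allclose(ein, loop))  # True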

Upvotes: 2

Divakar

Reputation: 221724

For such sum-reductions with multi-dimensional arrays, we can use np.tensordot after reshaping randos to merge the last two axes into one, like so -

np.tensordot(np.array(randos).reshape(-1,4),lookup, axes=((-1),(0)))

Here's another one that reshapes the second array instead, for use again with np.tensordot -

lookup_arr = np.asarray(lookup).reshape(2,2,3,3,3)
out = np.tensordot(randos,lookup_arr,axes=((-2,-1),(0,1)))
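
As a quick sanity check (a sketch reusing create_table and sum_expansion from the question), the tensordot result matches the original loop, row by row:

randos = [np.random.rand(2, 2) for _ in range(100)]
lookup = create_table((2, 2), (3, 3, 3))

out = np.tensordot(np.array(randos).reshape(-1, 4), lookup, axes=((-1), (0)))

# Row i of out should equal the original per-array loop result.
print(np.allclose(out[0], sum_expansion(randos[0], lookup, (3, 3, 3))))  # True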

Runtime test -

In [69]: randos = [np.random.rand(2, 2) for _ in range(100)]

In [73]: lookup = create_table((2, 2), (3, 3, 3))

In [74]: lookup_arr = np.asarray(lookup).reshape(2,2,3,3,3)

In [75]: out1 = np.tensordot(np.array(randos).reshape(-1,4),lookup, axes=((-1),(0)))
    ...: out2 = np.tensordot(randos,lookup_arr,axes=((-2,-1),(0,1)))
    ...: 

In [76]: np.allclose(out1, out2)
Out[76]: True

In [77]: %timeit np.tensordot(np.array(randos).reshape(-1,4),\
                                      lookup, axes=((-1),(0)))
10000 loops, best of 3: 37 µs per loop

In [78]: %timeit np.tensordot(randos,lookup_arr,axes=((-2,-1),(0,1)))
10000 loops, best of 3: 33.3 µs per loop

In [79]: %timeit np.asarray(lookup).reshape(2,2,3,3,3)
100000 loops, best of 3: 2.18 µs per loop

Upvotes: 1

hpaulj

Reputation: 231698

In [20]: randos = [np.random.rand(2, 2) for _ in range(10)]

In [21]: timeit [sum_expansion(x,lookup,(3,3,3)) for x in randos]
10000 loops, best of 3: 184 µs per loop

Offhand, that time doesn't look bad. Each call to sum_expansion takes 18 µs.

In [22]: timeit create_table((2,2),(3,3,3))
100000 loops, best of 3: 14.1 µs per loop

It'll take more time to understand just what you are doing. I'm seeing a lot of Python iteration, and little numpy coding.


I get a 3x improvement using einsum to do the multiplication and sum:

def ein_expansion(arr, lookup, shape):
    # shape is unused here; kept only to match the original call signature.
    return np.einsum('ij,ij...', arr, lookup)

In [45]: L = np.array(lookup).reshape(2,2,3,3,3)

In [43]: timeit [ein_expansion(r, L,(3,3,3)) for r in randos]
10000 loops, best of 3: 58.3 µs per loop

We could get further improvements by operating on multiple randos arrays at once.

In [59]: timeit np.einsum('oij,ij...->o...',np.array(randos),L)
100000 loops, best of 3: 15.8 µs per loop

In [60]: np.einsum('oij,ij...->o...',np.array(randos),L).shape
Out[60]: (10, 3, 3, 3)
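
The batched einsum produces one (3, 3, 3) result per input array along the new o axis; a quick sketch checking it against the per-array version:

batched = np.einsum('oij,ij...->o...', np.array(randos), L)
per_arr = np.array([np.einsum('ij,ij...', r, L) for r in randos])
print(np.allclose(batched, per_arr))  # True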

Upvotes: 2
