Sibbs Gambling

Reputation: 20355

Python: Vectorizing Matrix Multiplications in the Loops?

I have an N-by-M array; at each entry of it I need to perform some NumPy operations and store the result.

Right now, I'm doing it the naive way with a double loop:

import numpy as np

N = 10
M = 11
K = 100

result = np.zeros((N, M))

is_relevant = np.random.rand(N, M, K) > 0.5  # (N, M, K) boolean mask
weight = np.random.rand(3, 3, K)             # (3, 3, K)
values1 = np.random.rand(3, 3, K)            # (3, 3, K)
values2 = np.random.rand(3, 3, K)            # (3, 3, K)

for i in range(N):
    for j in range(M):
        # Boolean mask over the K slices that are relevant at entry (i, j)
        selector = is_relevant[i, j, :]
        # Element-wise product of the selected 3x3 slices, summed to a scalar
        result[i, j] = np.sum(
            np.multiply(
                np.multiply(
                    values1[..., selector],
                    values2[..., selector]
                ), weight[..., selector]
            )
        )

Since all the in-loop operations are plain NumPy operations, I think there must be a faster or entirely loop-free way to do this.

Upvotes: 2

Views: 116

Answers (2)

Divakar

Reputation: 221584

We can use a combination of np.einsum and np.tensordot -

a = np.einsum('ijk,ijk,ijk->k',values1,values2,weight)
out = np.tensordot(a,is_relevant,axes=(0,2))
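As a quick sanity check, a minimal sketch comparing this contraction against the question's double loop (assuming the arrays and the loop's result from the question are in scope):

a = np.einsum('ijk,ijk,ijk->k', values1, values2, weight)  # per-k sum of the 3x3 products, shape (K,)
out = np.tensordot(a, is_relevant, axes=(0, 2))            # contract k with is_relevant's last axis -> (N, M)
print(np.allclose(out, result))                            # expected: True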

Alternatively, with one einsum call -

np.einsum('ijk,ijk,ijk,lmk->lm',values1,values2,weight,is_relevant)

And with np.dot and einsum -

is_relevant.dot(np.einsum('ijk,ijk,ijk->k',values1,values2,weight))

Also, play around with the optimize flag in np.einsum by setting it to True to use BLAS.
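To inspect the contraction order that the optimize flag would pick, np.einsum_path can report the planned path first; a minimal sketch with the same operands:

# Report the contraction order and estimated FLOP savings for this einsum
path, info = np.einsum_path('ijk,ijk,ijk,lmk->lm',
                            values1, values2, weight, is_relevant,
                            optimize='greedy')
print(info)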

Timings -

In [146]: %%timeit
     ...: a = np.einsum('ijk,ijk,ijk->k',values1,values2,weight)
     ...: out = np.tensordot(a,is_relevant,axes=(0,2))
10000 loops, best of 3: 121 µs per loop

In [147]: %timeit np.einsum('ijk,ijk,ijk,lmk->lm',values1,values2,weight,is_relevant)
1000 loops, best of 3: 851 µs per loop

In [148]: %timeit np.einsum('ijk,ijk,ijk,lmk->lm',values1,values2,weight,is_relevant,optimize=True)
1000 loops, best of 3: 347 µs per loop

In [156]: %timeit is_relevant.dot(np.einsum('ijk,ijk,ijk->k',values1,values2,weight))
10000 loops, best of 3: 58.6 µs per loop

Very large arrays

For very large arrays, we can leverage numexpr to make use of multi-cores -

import numexpr as ne

a = np.einsum('ijk,ijk,ijk->k',values1,values2,weight)
out = np.empty((N, M))
for i in range(N):
    for j in range(M):
        out[i,j] = ne.evaluate('sum(is_relevant_ij*a)',{'is_relevant_ij':is_relevant[i,j], 'a':a})
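The number of threads numexpr uses can also be capped explicitly; a minimal sketch using numexpr's standard threading helpers:

import numexpr as ne

# numexpr parallelizes evaluate() across all detected cores by default
print(ne.detect_number_of_cores())

# Cap the thread pool explicitly; the previous setting is returned
previous = ne.set_num_threads(4)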

Upvotes: 3

javidcf

Reputation: 59711

Another very simple option is just:

result = (values1 * values2 * weight * is_relevant[:, :, np.newaxis, np.newaxis]).sum((2, 3, 4))

Divakar's last solution is faster than this, though. Timings for comparison:

%timeit np.tensordot(np.einsum('ijk,ijk,ijk->k',values1,values2,weight),is_relevant,axes=(0,2))
# 30.9 µs ± 1.71 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.einsum('ijk,ijk,ijk,lmk->lm',values1,values2,weight,is_relevant)
# 379 µs ± 486 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit np.einsum('ijk,ijk,ijk,lmk->lm',values1,values2,weight,is_relevant,optimize=True)
# 145 µs ± 1.89 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit is_relevant.dot(np.einsum('ijk,ijk,ijk->k',values1,values2,weight))
# 15 µs ± 124 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit (values1 * values2 * weight * is_relevant[:, :, np.newaxis, np.newaxis]).sum((2, 3, 4))
# 152 µs ± 1.4 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
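As a side note, the broadcasted one-liner materializes a full (N, M, 3, 3, K) temporary before the reduction, so memory grows with N*M*K. A rough sketch of the footprint with the question's sizes:

# Rough size of the (N, M, 3, 3, K) float64 temporary created by the broadcast
N, M, K = 10, 11, 100
temp_bytes = N * M * 3 * 3 * K * 8
print(f"{temp_bytes / 1e6:.2f} MB")  # ~0.79 MB here, but scales linearly with N*M*K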

Upvotes: 2
