user1827975

Reputation: 427

How can I do multidimensional matrix multiplication in numpy, scipy

I have a scipy sparse matrix with shape (8, 9) and another array with shape (9, 12, 17). I want to multiply these such that I get a matrix/array of size (8, 12, 17), where the (8, 9) matrix has effectively multiplied the first dimension only. Do I have to use Kronecker products to do this, or is there a simple way in numpy?

Upvotes: 0

Views: 256

Answers (4)

hpaulj

Reputation: 231325

If m1 is the 2d sparse matrix, m1.A is its dense array form. The dimensions practically write the einsum expression for you:

np.einsum('ij,jkl->ikl', m1.A, m2)

for example:

In [506]: M = sparse.random(8, 9, 0.1)
In [507]: A = np.ones((9, 12, 17))
In [508]: np.einsum('ij,jkl->ikl', M.A, A).shape
Out[508]: (8, 12, 17)
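A self-contained version of the example above (assuming scipy is installed), checked against an explicit double loop over the trailing axes:

```python
import numpy as np
from scipy import sparse

M = sparse.random(8, 9, density=0.1, random_state=0)  # (8, 9) sparse
A = np.random.default_rng(0).random((9, 12, 17))      # (9, 12, 17) dense

# Densify the sparse matrix and contract over the shared axis of length 9.
C = np.einsum('ij,jkl->ikl', M.toarray(), A)

# Reference: multiply each (9,)-slice of A by M explicitly.
ref = np.zeros((8, 12, 17))
for k in range(12):
    for l in range(17):
        ref[:, k, l] = M.toarray() @ A[:, k, l]

print(C.shape)              # (8, 12, 17)
print(np.allclose(C, ref))  # True
```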

Upvotes: 2

Alicia Garcia-Raboso

Reputation: 13913

@Divakar recommended np.tensordot, and @hpaulj and @Praveen suggested np.einsum. Yet another way is transposing axes:

(a @ b.transpose((2, 0, 1))).transpose((1, 2, 0))

For the small dimensions that you quote, np.einsum and transposition seem to be faster. But once you start scaling up the dimension of the axis along which you are multiplying, np.tensordot beats the other two.

import numpy as np

m, n, k, l = 8, 9, 12, 17
a = np.random.random((m, n))
b = np.random.random((n, k, l))

%timeit np.tensordot(a, b, axes=([1], [0]))
# => 10000 loops, best of 3: 22 µs per loop
%timeit np.einsum("ij,jkl->ikl", a, b)
# => 100000 loops, best of 3: 10.1 µs per loop
%timeit (a @ b.transpose((2, 0, 1))).transpose((1, 2, 0))
# => 100000 loops, best of 3: 11.1 µs per loop

m, n, k, l = 8, 900, 12, 17
a = np.random.random((m, n))
b = np.random.random((n, k, l))

%timeit np.tensordot(a, b, axes=([1], [0]))
# => 1000 loops, best of 3: 198 µs per loop
%timeit np.einsum("ij,jkl->ikl", a, b)
# => 1000 loops, best of 3: 868 µs per loop
%timeit (a @ b.transpose((2, 0, 1))).transpose((1, 2, 0))
# => 1000 loops, best of 3: 907 µs per loop

m, n, k, l = 8, 90000, 12, 17
a = np.random.random((m, n))
b = np.random.random((n, k, l))

%timeit np.tensordot(a, b, axes=([1], [0]))
# => 10 loops, best of 3: 21.7 ms per loop
%timeit np.einsum("ij,jkl->ikl", a, b)
# => 10 loops, best of 3: 164 ms per loop
%timeit (a @ b.transpose((2, 0, 1))).transpose((1, 2, 0))
# => 10 loops, best of 3: 166 ms per loop
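For the record, the three approaches compute the same result; a quick sanity check with the small shapes:

```python
import numpy as np

m, n, k, l = 8, 9, 12, 17
a = np.random.random((m, n))
b = np.random.random((n, k, l))

c1 = np.tensordot(a, b, axes=([1], [0]))
c2 = np.einsum('ij,jkl->ikl', a, b)
c3 = (a @ b.transpose((2, 0, 1))).transpose((1, 2, 0))

print(np.allclose(c1, c2) and np.allclose(c1, c3))  # True
```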

Upvotes: 1

Praveen

Reputation: 7222

As hpaulj suggests in the comments, the easiest way to do this is to use np.einsum with a dense matrix:

>>> a = np.random.randn(8, 9)
>>> b = np.random.randn(9, 12, 17)
>>> c = np.einsum('ij,jkl->ikl', a, b)
>>> c.shape
(8, 12, 17)

Upvotes: 1

user1827975

Reputation: 427

Here are a couple of ways I can get it to work. The second seems to be better, and was about 12 times faster in my tests.

def multiply_3D_dim_zero_slow(matrix, array):
    final_shape = (matrix.shape[0], array.shape[1], array.shape[2])
    result = np.zeros(final_shape)
    for i in range(array.shape[1]):
        for j in range(array.shape[2]):
            # Sparse matrix times a 1-D slice: one (8,) column at a time.
            result[:, i, j] = matrix * array[:, i, j]
    return result

And here is a faster version that uses reshape to make the multidimensional array into a 2D array.

def multiply_3D_dim_zero(matrix, array):
    shape = array.shape
    final_shape = (matrix.shape[0], shape[1], shape[2])
    # Flatten the trailing dimensions so the sparse product is one 2-D multiply.
    array_reshaped = array.reshape(shape[0], shape[1] * shape[2])
    return (matrix * array_reshaped).reshape(final_shape)

This only works on the first dimension, which is what I need, but it could be generalized.
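One possible generalization of the reshape trick to an arbitrary axis (the helper name and the use of np.moveaxis are my own, not from any of the answers above):

```python
import numpy as np
from scipy import sparse

def multiply_nd_along_axis(matrix, array, axis=0):
    """Contract a 2-D `matrix` (dense or sparse) with `array` along `axis`.

    Moves the target axis to the front, flattens the remaining axes,
    does a single 2-D product, then restores the original axis order.
    """
    moved = np.moveaxis(array, axis, 0)
    rest = moved.shape[1:]
    flat = moved.reshape(moved.shape[0], -1)
    out = matrix @ flat  # sparse @ dense may return np.matrix, hence asarray
    out = np.asarray(out).reshape((matrix.shape[0],) + rest)
    return np.moveaxis(out, 0, axis)

M = sparse.random(8, 9, density=0.1, random_state=0)
A = np.random.random((12, 9, 17))
print(multiply_nd_along_axis(M, A, axis=1).shape)  # (12, 8, 17)
```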

Upvotes: 0
