piliv

Reputation: 53

Fastest way to compute a large number of 3x3 dot products

I have to compute a large number of 3x3 linear transformations (e.g. rotations). This is what I have so far:

import numpy as np
from scipy import sparse
from numba import jit

n = 100000 # number of transformations
k = 100 # number of vectors for each transformation

A = np.random.rand(n, 3, k) # vectors
Op = np.random.rand(n, 3, 3) # operators
sOp = sparse.bsr_matrix((Op, np.arange(n), np.arange(n+1))) # same as Op but as block-diag

def dot1():
    """ naive approach: many times np.dot """
    return np.stack([np.dot(o, a) for o, a in zip(Op, A)])

@jit(nopython=True)
def dot2():
    """ same as above, but jitted """
    new = np.empty_like(A)
    for i in range(Op.shape[0]):
        new[i] = np.dot(Op[i], A[i])
    return new

def dot3():
    """ using einsum """
    return np.einsum("ijk,ikl->ijl", Op, A)

def dot4():
    """ using sparse block diag matrix """
    return sOp.dot(A.reshape(3 * n, -1)).reshape(n, 3, -1)

On a 2012 MacBook Pro, this gives me:

In [62]: %timeit dot1()
783 ms ± 20.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [63]: %timeit dot2()
261 ms ± 1.93 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [64]: %timeit dot3()
293 ms ± 2.89 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [65]: %timeit dot4()
281 ms ± 6.15 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Apart from the naive approach, all approaches perform similarly. Is there a way to accelerate this significantly?

Edit

(The CUDA approach is the best when available. The following compares the non-CUDA versions.)

Following the various suggestions, I modified dot2, added an Op@A method (dot5), and added a version based on #59356461 (dot6).

from numba import njit, prange

@njit(fastmath=True, parallel=True)
def dot2(Op, A):
    """ same as above, but jitted """
    new = np.empty_like(A)
    for i in prange(Op.shape[0]):
        new[i] = np.dot(Op[i], A[i])
    return new

def dot5(Op, A):
    """ using matmul """
    return Op@A

@njit(fastmath=True, parallel=True)
def dot6(Op, A):
    """ another numba.jit with parallel (based on #59356461) """
    new = np.empty_like(A)
    for i_n in prange(A.shape[0]):
        for i_k in range(A.shape[2]):
            for i_x in range(3):
                acc = 0.0j
                for i_y in range(3):
                    acc += Op[i_n, i_x, i_y] * A[i_n, i_y, i_k]
                new[i_n, i_x, i_k] = acc
    return new


This is what I get (on a different machine) with benchit:

def gen(n, k):
    Op = np.random.rand(n, 3, 3) + 1j * np.random.rand(n, 3, 3)
    A = np.random.rand(n, 3, k) + 1j * np.random.rand(n, 3, k)
    return Op, A

# benchit
import benchit
funcs = [dot1, dot2, dot3, dot4, dot5, dot6]  # assumes dot1/dot3/dot4 were also changed to take (Op, A)
inputs = {n: gen(n, 100) for n in [100,1000,10000,100000,1000000]}

t = benchit.timings(funcs, inputs, multivar=True, input_name='Number of operators')
t.plot(logy=True, logx=True)

[Plot: benchit timings for dot1-dot6 vs. number of operators, log-log scale]

Upvotes: 4

Views: 571

Answers (3)

Richard

Reputation: 3394

In one answer, Nick mentioned using the GPU, which is of course the best solution.

But, as a general rule, what you're doing is likely CPU limited. So, with the exception of the GPU approach, the biggest gain you can get is to make use of all the cores on your machine and have them work in parallel.

For that you would want to use multiprocessing (not Python's multithreading!) to split the job into pieces that run on each core in parallel.

This is not trivial, but also not too hard, and there are many good examples/guides online.

But on an 8-core machine it would likely give you close to an 8x speedup, as long as you're careful to avoid the memory bottleneck of passing many small objects between processes: pass the data in one large chunk per worker at the start (see the sketch below).
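
A minimal sketch of that idea (not from the answer itself), using multiprocessing.Pool with the batched Op @ A kernel from the other answers as the per-chunk work; the names parallel_dot and chunk_matmul are illustrative:

import numpy as np
from multiprocessing import Pool

def chunk_matmul(args):
    # one worker's share: a batched 3x3 matmul on its chunk
    op_chunk, a_chunk = args
    return op_chunk @ a_chunk

def parallel_dot(Op, A, workers=8):
    # split the n transformations into one large chunk per worker,
    # so the data is handed to each process once, up front
    op_chunks = np.array_split(Op, workers)
    a_chunks = np.array_split(A, workers)
    with Pool(workers) as pool:
        results = pool.map(chunk_matmul, list(zip(op_chunks, a_chunks)))
    return np.concatenate(results)

if __name__ == "__main__":
    n, k = 100000, 100
    Op = np.random.rand(n, 3, 3)
    A = np.random.rand(n, 3, k)
    out = parallel_dot(Op, A)

Whether this pays off depends on how the cost of copying each chunk to a worker process compares with the matmul itself.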

Upvotes: 1

Nick Becker

Reputation: 4214

You've gotten some great suggestions, but I wanted to add one more due to this specific goal:

Is there a way to accelerate this significantly?

Realistically, if you need these operations to be significantly faster (which often means > 10x), you probably would want to use a GPU for the matrix multiplication. As a quick example:

import numpy as np
import cupy as cp

n = 100000 # number of transformations
k = 100 # number of vectors for each transformation

# CPU version
A = np.random.rand(n, 3, k) # vectors
Op = np.random.rand(n, 3, 3) # operators

def dot5(): # the suggested, best CPU approach
    return Op@A


# GPU version using a V100
gA = cp.asarray(A)
gOp = cp.asarray(Op)

# run once to ignore JIT overhead before benchmarking
gOp@gA;

%timeit dot5()
%timeit gOp@gA; cp.cuda.Device().synchronize() # need to sync for a fair benchmark
112 ms ± 546 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
1.19 ms ± 1.34 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
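
If the result is needed back on the host afterwards, it can be copied with cp.asnumpy; note that neither this copy nor the initial cp.asarray transfers are included in the timing above:

result = gOp @ gA                  # computed and kept on the GPU
host_result = cp.asnumpy(result)   # explicit device-to-host copy when needed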

Upvotes: 4

Ehsan

Reputation: 12397

Use Op@A, as suggested by @hpaulj in the comments.

Here is a comparison using benchit:

import numpy as np
from scipy import sparse
from numba import jit

def dot1(A,Op):
    """ naive approach: many times np.dot """
    return np.stack([np.dot(o, a) for o, a in zip(Op, A)])

@jit(nopython=True)
def dot2(A,Op):
    """ same as above, but jitted """
    new = np.empty_like(A)
    for i in range(Op.shape[0]):
        new[i] = np.dot(Op[i], A[i])
    return new

def dot3(A,Op):
    """ using einsum """
    return np.einsum("ijk,ikl->ijl", Op, A)

def dot4(A,Op):
    """ using sparse block diag matrix """
    n = A.shape[0]
    sOp = sparse.bsr_matrix((Op, np.arange(n), np.arange(n+1))) # same as Op but as block-diag
    return sOp.dot(A.reshape(3 * n, -1)).reshape(n, 3, -1)

def dot5(A,Op):
  return Op@A

k = 100 # number of vectors for each transformation, as in the question
in_ = {n:[np.random.rand(n, 3, k), np.random.rand(n, 3, 3)] for n in [100,1000,10000,100000,1000000]}
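
The benchit driver itself is not shown here; assuming the same call pattern as the question's edit, it would look something like this:

import benchit
funcs = [dot1, dot2, dot3, dot4, dot5]
t = benchit.timings(funcs, in_, multivar=True, input_name='Number of operators')
t.plot(logy=True, logx=True)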

They seem to be close in performance at larger scales, with dot5 being slightly faster.

[Plot: benchit timings for dot1-dot5 vs. number of operators]

Upvotes: 4
