varantir

Reputation: 6854

julia multiplication of two arrays

Is there a way to speed up, or write more elegantly, this array multiplication (which, for numpy arrays, I would write as A @ B)?

using LinearAlgebra  # provides the ⋅ (dot) operator used below

A = rand(8,15,10)
B = rand(10,5)
C = zeros(8,15,5)
for i in 1:8
    for j in 1:15
        for k in 1:10
            for l in 1:5
                C[i,j,l] = A[i,j,:]⋅B[:,l]
            end
        end
    end
end

Upvotes: 2

Views: 191

Answers (1)

carstenbauer

Reputation: 10117

There are a bunch of Julia packages which allow you to write your contraction in one simple line. Here are a few examples based on Einsum.jl, OMEinsum.jl, and TensorOperations.jl:

using OMEinsum
f_omeinsum(A,B) = ein"ijk,km->ijm"(A,B)

using Einsum
f_einsum(A,B) = @einsum C[i,j,l] := A[i,j,k] * B[k,l]

using TensorOperations
f_tensor(A,B) = @tensor C[i,j,l] := A[i,j,k] * B[k,l]

Apart from these elegant (and fast, see below) versions, you can improve your loop code quite a bit. Here is your code wrapped into a function, followed by an improved version with comments:

function f(A,B)
    C = zeros(8,15,5)
    for i in 1:8
        for j in 1:15
            for k in 1:10
                for l in 1:5
                    C[i,j,l] = A[i,j,:]⋅B[:,l]
                end
            end
        end
    end
    return C
end

function f_fast(A,B)
    # check that the contraction dimensions match
    n1,n2,n3 = size(A)
    m1,m2 = size(B)
    @assert m1 == n3
    C = zeros(n1,n2,m2)

    # * @inbounds to skip bounds checks inside the loop
    # * loop order chosen to match Julia's column-major memory layout
    # * the k-loop (dot product) written out explicitly to avoid temporary allocations
    @inbounds for l in 1:m2
        for k in 1:m1
            for j in 1:n2
                for i in 1:n1
                    C[i,j,l] += A[i,j,k]*B[k,l]
                end
            end
        end
    end
    return C
end

Let's compare all approaches. First we check for correctness:

using Test
@test f(A,B) ≈ f_omeinsum(A,B) # Test passed
@test f(A,B) ≈ f_einsum(A,B) # Test passed
@test f(A,B) ≈ f_tensor(A,B) # Test passed
@test f(A,B) ≈ f_fast(A,B) # Test passed

Now, let's benchmark using BenchmarkTools.jl. I put the timings on my machine as comments.

using BenchmarkTools
@btime f($A,$B); # 663.500 μs (12001 allocations: 1.84 MiB)
@btime f_omeinsum($A,$B); # 33.799 μs (242 allocations: 20.20 KiB)
@btime f_einsum($A,$B); # 4.200 μs (1 allocation: 4.81 KiB)
@btime f_tensor($A,$B); # 2.367 μs (3 allocations: 4.94 KiB)
@btime f_fast($A,$B); # 7.375 μs (1 allocation: 4.81 KiB)

As we can see, all the einsum/tensor-notation based approaches are much faster than your original loop implementation - and they are one-liners! The performance of our f_fast is in the same ballpark, but still quite a bit behind f_tensor, which is the fastest.

Finally, let's go all in on performance, because we can. Utilizing the wizardry from LoopVectorization.jl, we replace the @inbounds in f_fast with @avx (we call this new version f_avx below) and automagically get another 2x speed-up relative to the f_tensor timing above:
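Spelled out, f_avx is just f_fast with the @inbounds annotation swapped for LoopVectorization's @avx macro (note: in later LoopVectorization releases the macro was renamed @turbo):

```julia
using LoopVectorization  # provides the @avx macro (@turbo in later releases)

function f_avx(A,B)
    n1,n2,n3 = size(A)
    m1,m2 = size(B)
    @assert m1 == n3
    C = zeros(n1,n2,m2)
    # identical loop nest to f_fast; @avx lets LoopVectorization vectorize it
    @avx for l in 1:m2
        for k in 1:m1
            for j in 1:n2
                for i in 1:n1
                    C[i,j,l] += A[i,j,k]*B[k,l]
                end
            end
        end
    end
    return C
end
```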

@test f(A,B) ≈ f_avx(A,B) # Test passed
@btime f_avx($A,$B); # 930.769 ns (1 allocation: 4.81 KiB)

However, because of its simplicity I'd still prefer f_tensor unless every microsecond counts in your application.

Upvotes: 3
