Clock Su

Reputation: 31

Eigen is much slower than Fortran in matrix multiplication using an explicit loop

I tried to rewrite some Fortran code in C++, implementing a 2000×2000 matrix multiplication through the Eigen library. I found that the explicit for loop in Eigen is much slower (>3x) than the corresponding do loop in Fortran. The codes are listed below:

test.f90

program main
implicit none
integer :: n,i,j,k
integer :: tic,toc
real(8),ALLOCATABLE ::a(:,:),b(:,:),c(:,:)
real(8) :: s

n = 2000
allocate(a(n,n),b(n,n),c(n,n))
do i=1,n
    do j =1,n
        a(j,i) = i * 1.0
        b(j,i) = i * 1.0
    enddo
enddo

call system_clock(tic)
do j=1,n
    do i=1,n
        s = 0.0
        do k=1,n
            s = s + a(i,k) * b(k,j)
        enddo
        c(i,j) = s
    enddo
enddo
call system_clock(toc)
print*,'Fortran with loop:', (toc - tic) / 1000.0

call system_clock(tic)
c = matmul(a,b)
call system_clock(toc)
print*,'Fortran with matmul:', (toc - tic) / 1000.0


DEALLOCATE(a,b,c)
end

test.cpp

#include<Eigen/Core>
#include<time.h>
#include<iostream>
using Eigen::MatrixXd;

int main(){
    int n = 2000;
    MatrixXd a(n,n),b(n,n),c(n,n);
    for(int i=0;i<n;i++){
    for(int j=0;j<n;j++){
            a(i,j) = i * 1.0;
            b(i,j) = j * 1.0;
        }
    }
    clock_t tic,toc;
    tic = clock();
    for(int j=0;j<n;j++){
        for(int i=0;i<n;i++){
            double s= 0.0;
            for(int k=0;k<n;k++){
                s += a(i,k) * b(k,j);
            }
            c(i,j) = s;
        }
    }
    toc = clock();
    std::cout << (double)((toc - tic)) / CLOCKS_PER_SEC << std::endl;

    tic = clock();
    c=  a * b;
    toc = clock();
    std::cout << (double)((toc - tic)) / CLOCKS_PER_SEC << std::endl;
}

Compiled with (gcc 8.4, on Ubuntu 18.04):

gfortran test.f90 -O3 -march=native -o testf
g++ test.cpp -O3 -march=native -I/path/to/eigen -o testcpp 

And I get these results:

Fortran with loop:   10.9700003
Fortran with matmul:   0.834999979
Eigen with loop: 38.2188
Eigen with *: 0.40625

The internal implementations are of comparable speed, but why is Eigen so much slower for the loop implementation?

Upvotes: 3

Views: 1059

Answers (3)

knia

Reputation: 749

The biggest problem with the loops is that they are not done in the proper order for either C++ (which should be row-major) or Fortran (which should be column-major). This gives you a large performance hit, especially for large matrices.

The nativemul implementation by John Alexiou (with dot_product) has the same problem, so I am very surprised that he claims it's faster. (And I find that it isn't; see below. Maybe his (Intel?) compiler rewrites the code to use matmul internally.)

This is the correct loop order for Fortran:

    c = 0
    do j=1,n
        do k=1,n
            do i=1,n
                c(i,j) = c(i,j) + a(i,k) * b(k,j)
            enddo
        enddo
    enddo

With gfortran 10.2.0, compiled with -O3, I get

 Fortran with original OP's loop:        53.5190010
 Fortran with John Alexiou's nativemul:  53.4309998
 Fortran with correct loop:              11.0679998
 Fortran with matmul:                     2.3699999

A correct loop in C++ should give you similar performance.
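For the OP's test.cpp, a minimal sketch of the analogous fix (assuming Eigen's default column-major storage for MatrixXd and reusing the a, b, c, n from the question) would be:

    // Cache-friendly ordering for column-major storage: the innermost index i
    // walks down a column, so the accesses to c and a are contiguous in memory.
    c.setZero();
    for (int j = 0; j < n; ++j) {
        for (int k = 0; k < n; ++k) {
            const double bkj = b(k, j);  // constant with respect to i
            for (int i = 0; i < n; ++i) {
                c(i, j) += a(i, k) * bkj;
            }
        }
    }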

Obviously matmul/BLAS are much faster for large matrices.
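(On the Eigen side, the OP's c = a * b already dispatches to Eigen's optimized product kernel. As a small sketch of a further tweak, assuming c does not alias a or b, adding .noalias() lets Eigen skip the temporary it would otherwise create for the product:)

    c.noalias() = a * b;  // Eigen's built-in product, analogous to matmul/gemm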

Upvotes: 3

John Alexiou

Reputation: 29244

In the Fortran code I saw the same problem, but then I moved the matrix multiplication into a subroutine and the resulting speed was almost as good as matmul. I also compared against the BLAS Level 3 routine.

Fortran with loop:   9.220000
Fortran with matmul:   8.450000
Fortran with blas3:   2.050000

and the code that produces it:

program ConsoleMatMul
use BLAS95
implicit none
integer :: n,i,j
integer :: tic,toc
real(8),ALLOCATABLE :: a(:,:),b(:,:),c(:,:),xe(:,:)

n = 2000
allocate(a(n,n),b(n,n),c(n,n),xe(n,n))
do i=1,n
    do j =1,n
        a(j,i) = i * 1.0
        b(j,i) = i * 1.0
    enddo
enddo

call system_clock(tic)
call nativemul(a,b,c)
call system_clock(toc)
print*,'Fortran with loop:', (toc - tic) / 1000.0

call system_clock(tic)
c = matmul(a,b)
call system_clock(toc)
print*,'Fortran with matmul:', (toc - tic) / 1000.0
c = b
xe = 0d0
call system_clock(tic)
call gemm(a,c,xe) ! BLAS MATRIX/MATRIX MUL
call system_clock(toc)
print*,'Fortran with blas3:', (toc - tic) / 1000.0

DEALLOCATE(a,b,c)

contains

pure subroutine nativemul(a,b,c)
real(8), intent(in), allocatable :: a(:,:), b(:,:)
real(8), intent(out), allocatable :: c(:,:)
real(8) :: s
integer :: n, i,j,k
    n = size(a,1)
    if (.not. allocated(c)) allocate(c(n,n))
    do j=1,n
        do i=1,n
            s = 0.0d0
            do k=1,n
                s = s + a(i,k) * b(k,j)
            end do
            c(i,j) = s
        end do
    end do
end subroutine    

end program ConsoleMatMul

Before I moved the code into a subroutine I got:

Fortran with loop:   85.450000

Update: the native multiplication reaches matmul levels (or exceeds them) when the inner loop is replaced by a dot_product().

pure subroutine nativemul(a,b,c)
real(8), intent(in) :: a(:,:), b(:,:)
real(8), intent(out) :: c(:,:)
integer :: n, i,j
    n = size(a,1)
    do j=1,n
        do i=1,n
            c(i,j) = dot_product(a(i,:),b(:,j))
            ! or  = sum(a(i,:)*b(:,j))
        end do
    end do
end subroutine    

Upvotes: 0

PodHunter

Reputation: 74

C++ pre-increment can be faster than post-increment (this matters mostly for iterator/class types; for plain int loop counters an optimizing compiler typically generates identical code):

for(int j=0;j<n;++j){
    for(int i=0;i<n;++i){
        double s = 0.0;
        for(int k=0;k<n;++k){
            s += a(i,k) * b(k,j);
        }
        c(i,j) = s;
    }
}

Upvotes: -1
