John. Tang

Reputation: 59

Why is BLAS slower than NumPy?

Thanks to Mats Petersson for his help. The running time of his C++ code finally looks right! But I have two new questions.

  1. Why is Mats Petersson's code twice as fast as mine?

Mats Petersson's C++ code is:

#include <iostream>
#include <openblas/cblas.h>
#include <array>
#include <iterator>
#include <random>
#include <ctime>
using namespace std;
const blasint m = 100, k = 100, n = 100;
// Mats Petersson's declaration: 500 independent A, B, C matrix sets
array<array<double, k>, m> AA[500]; 
array<array<double, n>, k> BB[500]; 
array<array<double, n>, m> CC[500]; 
// My declaration: a single A, B, C set, reused on every iteration
array<array<double, k>, m> AA1; 
array<array<double, n>, k> BB1; 
array<array<double, n>, m> CC1; 

int main(void) {
    CBLAS_ORDER Order = CblasRowMajor;
    CBLAS_TRANSPOSE TransA = CblasNoTrans, TransB = CblasNoTrans;

    const double alpha = 1;   // cblas_dgemm takes double scalars
    const double beta = 0;
    const int lda = k;
    const int ldb = n;
    const int ldc = n;
    default_random_engine r_engine(time(0));
    uniform_real_distribution<double> uniform(0, 1);

    double dur = 0;
    clock_t start,end;
    double total = 0;
    // Mats Petersson's initialization and computation
    for(int i = 0; i < 500; i++) {
        for (array<array<double, k>, m>::iterator iter = AA[i].begin(); iter != AA[i].end(); ++iter) {
            for (double &number : (*iter))
                number = uniform(r_engine);
        }
        for (array<array<double, n>, k>::iterator iter = BB[i].begin(); iter != BB[i].end(); ++iter) {
            for (double &number : (*iter))
                number = uniform(r_engine);
        }
    }
    start = clock();
    for(int i = 0; i < 500; ++i){
        cblas_dgemm(Order, TransA, TransB, m, n, k, alpha, &AA[i][0][0], lda, &BB[i][0][0], ldb, beta, &CC[i][0][0], ldc);
    }
    end = clock();
    dur += (double)(end - start);
    cout<<endl<<"Mats Petersson spends "<<(dur/CLOCKS_PER_SEC)<<" seconds to compute it"<<endl<<endl;

    // Now it's my turn: my initialization and computation, timing each dgemm call separately
    dur = 0;
    for(int i = 0; i < 500; i++){
        for(array<array<double, k>, m>::iterator iter = AA1.begin(); iter != AA1.end(); ++iter){
            for(double& number : (*iter))
                number = uniform(r_engine);
        }
        for(array<array<double, n>, k>::iterator iter = BB1.begin(); iter != BB1.end(); ++iter){
            for(double& number : (*iter))
                number = uniform(r_engine);
        }
        start = clock();
        cblas_dgemm(Order, TransA, TransB, m, n, k, alpha, &AA1[0][0], lda, &BB1[0][0], ldb, beta, &CC1[0][0], ldc);
        end = clock();
        dur += (double)(end - start);
    }

    cout<<endl<<"I spend "<<(dur/CLOCKS_PER_SEC)<<" seconds to compute it"<<endl<<endl;  
}

Here is the result:

Mats Petersson spends 0.215056 seconds to compute it

I spend 0.459066 seconds to compute it

So why is his code twice as fast as mine?
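
Before comparing the two numbers it may be worth ruling out the measurement itself: clock() on Linux reports CPU time consumed by the whole process (summed over all threads, which matters if OpenBLAS runs multi-threaded), not elapsed time, and the two loops also measure in different ways: one interval around all 500 calls versus 500 small intervals added up. The minimal sketch below is my own addition, not part of either program above; it assumes it is dropped into the same file so it can reuse the global AA1, BB1, CC1 and the m, n, k constants, and it times the identical workload both ways with std::chrono::steady_clock (wall-clock time), so the two strategies can be compared directly.

#include <chrono>

// Minimal sketch: same dgemm parameters as above, reusing the global
// AA1/BB1/CC1 matrices; times 500 multiplications once as a single
// wall-clock interval and once as 500 separately accumulated intervals.
void compare_timing_strategies() {
    using clk = std::chrono::steady_clock;

    // Strategy A: one interval around the whole loop (Mats Petersson's way).
    auto t0 = clk::now();
    for (int i = 0; i < 500; ++i)
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, m, n, k,
                    1.0, &AA1[0][0], k, &BB1[0][0], n, 0.0, &CC1[0][0], n);
    double batched = std::chrono::duration<double>(clk::now() - t0).count();

    // Strategy B: 500 separately timed calls, durations accumulated (my way).
    double accumulated = 0;
    for (int i = 0; i < 500; ++i) {
        auto s = clk::now();
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, m, n, k,
                    1.0, &AA1[0][0], k, &BB1[0][0], n, 0.0, &CC1[0][0], n);
        accumulated += std::chrono::duration<double>(clk::now() - s).count();
    }

    std::cout << "one interval around the loop: " << batched << " s" << std::endl;
    std::cout << "summed per-call intervals:    " << accumulated << " s" << std::endl;
}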

  2. Is Python still faster?

The NumPy code is:

import numpy as np
import time
a = {}
b = {}
c = {}
for i in range(500):
    a[i] = np.matrix(np.random.rand(100, 100))
    b[i] = np.matrix(np.random.rand(100, 100))
    c[i] = np.matrix(np.random.rand(100, 100))
start = time.time()
for i in range(500):
    c[i] = a[i]*b[i]
print(time.time() - start)

The result (a screenshot, not reproduced here) shows that NumPy is still faster on my machine.

I still cannot understand it!
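
If it helps to narrow down the comparison: NumPy's a[i]*b[i] on float64 matrices also ends up in a BLAS dgemm when NumPy is built against a BLAS library, so a fair comparison needs both sides to use the same BLAS build and the same number of threads. Below is a minimal diagnostic sketch for the C++ side, not part of the code above; it assumes a recent OpenBLAS whose cblas.h declares openblas_get_config(), openblas_get_num_threads() and openblas_set_num_threads().

#include <iostream>
#include <openblas/cblas.h>

int main() {
    // Which OpenBLAS build is in use (target CPU, threading model, version).
    std::cout << "OpenBLAS config:  " << openblas_get_config() << std::endl;
    std::cout << "threads (before): " << openblas_get_num_threads() << std::endl;

    // Pin the thread count so the C++ run and the NumPy run are comparable;
    // for an OpenBLAS-backed NumPy the same can be done by setting the
    // OPENBLAS_NUM_THREADS environment variable before starting Python.
    openblas_set_num_threads(4);  // 4 is only an example value
    std::cout << "threads (after):  " << openblas_get_num_threads() << std::endl;
}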

Upvotes: 4

Views: 569

Answers (1)

Mats Petersson

Reputation: 129374

So, I can't reproduce the original results; however, with this code:

#include <iostream>
#include <openblas/cblas.h>
#include <array>
#include <iterator>
#include <random>
#include <ctime>
using namespace std;

const blasint m = 100, k = 100, n = 100;
array<array<double, k>, m> AA[500];
array<array<double, n>, k> BB[500];
array<array<double, n>, m> CC[500];

int main(void) {
    CBLAS_ORDER Order = CblasRowMajor;
    CBLAS_TRANSPOSE TransA = CblasNoTrans, TransB = CblasNoTrans;


    const double alpha = 1;   // cblas_dgemm takes double scalars
    const double beta = 0;
    const int lda = k; 
    const int ldb = n; 
    const int ldc = n; 
    default_random_engine r_engine(time(0));
    uniform_real_distribution<double> uniform(0, 1);

    double dur = 0;
    clock_t start,end;
    double total = 0;

    for(int i = 0; i < 500; i++){
        for(array<array<double, k>, m>::iterator iter = AA[i].begin(); iter != AA[i].end(); ++iter){
            for(double& number : (*iter))
                number = uniform(r_engine);
        }
        for(array<array<double, n>, k>::iterator iter = BB[i].begin(); iter != BB[i].end(); ++iter){
            for(double& number : (*iter))
                number = uniform(r_engine);
        }
    }

    start = clock();
    for(int i = 0; i < 500; i++)
    {
        cblas_dgemm(Order, TransA, TransB, m, n, k, alpha, &AA[i][0][0], lda, &BB[i][0][0], ldb, beta, 
            &CC[i][0][0], ldc);
        total += CC[i][i/5][i/5];  // use one element of each result so all 500 products are really computed
    }
    end = clock();
    dur = (double)(end - start);

    cout<<endl<<"It spends "<<(dur/CLOCKS_PER_SEC)<<" seconds to compute it"<<endl<<endl;
    cout << "total =" << total << endl;
}

and this code:

import numpy as np
import time
a = {}
b = {}
c = {}
for i in range(500):
    a[i] = np.matrix(np.random.rand(100, 100))
    b[i] = np.matrix(np.random.rand(100, 100))
    c[i] = np.matrix(np.random.rand(100, 100))
start = time.time()
for i in range(500):
    c[i] = a[i]*b[i]
print(time.time() - start)

we know that the loops do (nearly) the same thing. My results are these:

  • python 2.7: 0.676353931427
  • python 3.4: 0.6782681941986084
  • clang++ -O2: 0.117377
  • g++ -O2: 0.117685

Making the arrays global ensures that we don't blow up the stack. I also changed rengine1 to rengine, since it wouldn't compile as it was.
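
To put numbers on the stack point: each of AA, BB and CC is 500 matrices of 100 x 100 doubles, i.e. 500 * 100 * 100 * 8 bytes = 40,000,000 bytes, roughly 38 MiB, so the three together come to about 114 MiB, while a typical default stack limit is around 8 MiB (ulimit -s on Linux). A quick check, purely as an illustration:

#include <array>
#include <iostream>

int main() {
    // One array as declared above: 500 matrices of 100 x 100 doubles.
    using Mat = std::array<std::array<double, 100>, 100>;
    std::cout << sizeof(Mat[500]) / (1024.0 * 1024.0) << " MiB per array\n";        // ~38 MiB
    std::cout << 3 * sizeof(Mat[500]) / (1024.0 * 1024.0) << " MiB for AA+BB+CC\n"; // ~114 MiB
    // Far more than the usual 8 MiB stack, so the arrays must be global
    // (or heap-allocated) rather than locals in main().
}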

I then made sure both examples calculate 500 different array values.

Interestingly, the total execution time for g++ is much shorter than for clang++, but that difference comes from the loop outside the time measurement; the actual matrix multiplication takes the same time, give or take a thousandth of a second. Total execution time for Python is somewhere between clang++ and g++.

Upvotes: 2
