Reputation: 1101
I have simple matrix multiplication code in Python (NumPy):
import numpy as np
import time

a = np.random.random((70000, 3000))
b = np.random.random((3000, 100))
t1 = time.time()
c = np.dot(a, b)
t2 = time.time()
print('Time passed is %2.2f seconds' % (t2 - t1))
It takes about 16 seconds to complete the multiplication (c = np.dot(a, b)) on one core. However, when I run the same multiplication in Matlab, it takes about 1 second using 6 cores.
So why is Matlab about 2.6 times faster per core than NumPy at matrix multiplication? (Per-core performance is what matters to me.)
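For a fair per-core comparison, the BLAS thread pool can be pinned to a single thread before NumPy is imported. A minimal sketch (the shapes are shrunk here so it runs quickly; which environment variable actually applies depends on the BLAS build NumPy links against):

```python
import os

# Pin the BLAS library to one thread for a per-core comparison.
# These must be set BEFORE numpy is imported; the variable that
# takes effect depends on the underlying BLAS (OpenMP, OpenBLAS, MKL).
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

import time
import numpy as np

# Smaller shapes than in the question, purely for illustration
a = np.random.random((7000, 300))
b = np.random.random((300, 100))

np.dot(a, b)  # warm-up run so one-time costs don't skew the timing

t1 = time.time()
c = np.dot(a, b)
t2 = time.time()
print('Time passed is %2.2f seconds' % (t2 - t1))
```

With both sides restricted to one thread, the remaining gap reflects the quality of the BLAS kernels rather than the number of cores used.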
UPDATE: I have tried the same thing using Eigen, and its performance is slightly better than Matlab's. Eigen uses the same BLAS implementation as NumPy, so the BLAS implementation may not be the source of the performance drawback.
To make sure that the installed NumPy uses BLAS, I ran np.show_config():
blas_info:
libraries = ['blas']
library_dirs = ['/usr/lib64']
language = f77
lapack_info:
libraries = ['lapack']
library_dirs = ['/usr/lib64']
language = f77
atlas_threads_info:
NOT AVAILABLE
blas_opt_info:
libraries = ['blas']
library_dirs = ['/usr/lib64']
language = f77
define_macros = [('NO_ATLAS_INFO', 1)]
atlas_blas_threads_info:
NOT AVAILABLE
lapack_opt_info:
libraries = ['lapack', 'blas']
library_dirs = ['/usr/lib64']
language = f77
define_macros = [('NO_ATLAS_INFO', 1)]
atlas_info:
NOT AVAILABLE
lapack_mkl_info:
NOT AVAILABLE
blas_mkl_info:
NOT AVAILABLE
atlas_blas_info:
NOT AVAILABLE
mkl_info:
NOT AVAILABLE
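The output above shows the plain 'blas' library in /usr/lib64, i.e. the reference (Netlib) BLAS, which is generally single-threaded and far slower than optimized implementations such as MKL or OpenBLAS. One hedged way to gauge what the linked BLAS actually achieves is to estimate its throughput in GFLOP/s (a dense n-by-n matmul performs roughly 2*n**3 floating-point operations):

```python
import time
import numpy as np

n = 500
a = np.random.random((n, n))
b = np.random.random((n, n))

np.dot(a, b)  # warm-up

t1 = time.time()
c = np.dot(a, b)
t2 = time.time()

# Rough throughput estimate: ~2*n**3 flops for a dense n x n matmul
gflops = 2.0 * n**3 / (t2 - t1) / 1e9
print('%.1f GFLOP/s' % gflops)
```

Reference BLAS typically lands around 1 GFLOP/s per core, while optimized BLAS libraries reach tens of GFLOP/s, which is consistent with the 16x gap reported above.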
Upvotes: 4
Views: 1751
Reputation: 12868
Try out the Enthought Python Distribution. For one thing, it is linked against the Intel Math Kernel Library (MKL), which is highly optimized and is also what Matlab uses.
Edit: Update for 2017. The Anaconda distribution is really the way to go these days.
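After switching distributions, the linkage can be verified the same way as in the question: with an MKL-linked build, the MKL sections in the configuration output list actual libraries instead of NOT AVAILABLE. A quick check:

```python
import numpy as np

# Prints numpy's build-time BLAS/LAPACK configuration; on an MKL-linked
# build the mkl-related sections show the MKL libraries.
np.show_config()
```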
Upvotes: 6