atbug

Reputation: 838

Performance in linear algebra with Python

Benchmarks of different languages and related questions are everywhere on the Internet. However, I still cannot figure out whether I should switch to C for my program.

Basically, the most time-consuming part of my program involves a lot of matrix inversion and matrix multiplication. I have several plans:

  1. Stick with numpy.
  2. Use C with LAPACK/BLAS directly.
  3. Rewrite the most time-consuming parts of my Python program in C and call them from Python.

I know numpy is essentially a wrapper around LAPACK/BLAS. So will 2 or 3 be substantially (say, 500%) faster than 1?
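
For reference, a minimal sketch (the matrix size is just a placeholder) that checks which BLAS/LAPACK build numpy is actually linked against and times the two operations in question; a hand-written C version would be calling the same kind of library:

    import time
    import numpy as np

    # Which BLAS/LAPACK is this numpy build linked against?
    np.show_config()

    n = 2000  # placeholder size
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    t0 = time.time()
    np.linalg.inv(a)   # matrix inverse (LAPACK under the hood)
    print("inv:", round(time.time() - t0, 3), "s")

    t0 = time.time()
    np.dot(a, b)       # matrix multiplication (BLAS gemm under the hood)
    print("dot:", round(time.time() - t0, 3), "s")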

Upvotes: 1

Views: 1451

Answers (1)

hsinghal

Reputation: 152

I was about to ask a very similar question when I saw yours. I have tested this from various directions; for quite some time I have been trying to beat the numpy.dot function with my own code.

I have large complex matrices, and their multiplication is the primary bottleneck of my program. I have tested the following methods:

  1. Simple C code.
  2. Cython code with various optimizations, calling CBLAS.
  3. 32-bit and 64-bit Python builds; the 64-bit version is 1.5-2 times faster than the 32-bit one.
  4. Anaconda's MKL-linked numpy, but no luck there either.
  5. numpy.einsum for the matrix multiplication.
  6. Python 3 and Python 2.7 perform the same, and the Python 3 @ operator is no faster either.
  7. numpy.dot(a, b, out=c), i.e. passing a preallocated output array, is marginally faster than c = numpy.dot(a, b).

By far, numpy.dot is the best. It beats every other method, sometimes only marginally (einsum), but mostly significantly.
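
As a rough illustration of the comparison, here is a simplified sketch (the size is a placeholder, my real matrices are larger, and a proper benchmark should repeat each call several times):

    import time
    import numpy as np

    n = 1000  # placeholder; my real matrices are larger
    a = np.random.rand(n, n) + 1j * np.random.rand(n, n)
    b = np.random.rand(n, n) + 1j * np.random.rand(n, n)
    out = np.empty((n, n), dtype=complex)

    candidates = [
        ("numpy.dot",          lambda: np.dot(a, b)),
        ("numpy.dot with out", lambda: np.dot(a, b, out=out)),
        ("einsum",             lambda: np.einsum("ij,jk->ik", a, b)),
        ("@ operator",         lambda: a @ b),
    ]

    for label, fn in candidates:
        t0 = time.time()
        fn()
        print(label, round(time.time() - t0, 3), "s")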

During my research I came across an article called Ultrafast matrix multiplication, which says that Apple's AltiVec implementation can multiply a 2500x2500 matrix in less than a second. On my PC (4th-generation Intel Core i3, 2.3 GHz, 4 GB RAM) the same multiplication took 73 seconds with numpy.dot, so I am still searching for a faster implementation on a PC.
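
For anyone who wants to check that figure on their own machine, a minimal reproduction is below; note that the BLAS thread count (e.g. MKL_NUM_THREADS or OPENBLAS_NUM_THREADS, set before importing numpy) can change the result a lot:

    import time
    import numpy as np

    n = 2500
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    t0 = time.time()
    np.dot(a, b)  # real double precision; complex matrices take noticeably longer
    print("2500x2500 dot:", round(time.time() - t0, 2), "s")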

Upvotes: 1
