Reputation: 445
During my acquaintance with CUDA in Python (the numba library), I implemented several matrix multiplication methods and compared them against numpy.dot().
So I tested it on 2 types of data:
numpy.random.randint(0, 5, (N, N)) # with int32 elements
numpy.random.random((N, N)) # with float64 elements
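A minimal timing comparison in the spirit of my benchmark (a sketch, not the exact code I used) looks like this:

import numpy as np
from timeit import timeit

N = 1024
A_int = np.random.randint(0, 5, (N, N))   # integer elements
B_int = np.random.randint(0, 5, (N, N))
A_flt = np.random.random((N, N))          # float64 elements
B_flt = np.random.random((N, N))

# Time numpy.dot() on both element types
print("int:    ", timeit(lambda: A_int.dot(B_int), number=10))
print("float64:", timeit(lambda: A_flt.dot(B_flt), number=10))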
For int32 I obtained the expected result: my GPU algorithms performed better than the CPU with numpy.
However, for float64, numpy.dot() outperformed all of my GPU methods.

So, the question is: why is numpy.dot() so fast with float64 arrays, and does numpy use the GPU?
Upvotes: 5
Views: 1758
Reputation: 74154
A typical installation of numpy will be dynamically linked against a BLAS library, which provides routines for matrix-matrix and matrix-vector multiplication. For example, when you use np.dot() on a pair of float64 arrays, numpy will call the BLAS dgemm routine in the background. Although these library functions run on the CPU rather than the GPU, they are often multithreaded, and are very finely tuned for performance. A good BLAS implementation, such as MKL or OpenBLAS, will probably be hard to beat in terms of performance, even on the GPU*.
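As a quick check, numpy.show_config() reports which BLAS/LAPACK libraries (if any) your installation was built against:

import numpy as np

# Prints the BLAS/LAPACK libraries this numpy build was compiled and linked against
np.show_config()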
However, BLAS only supports floating point types. If you call np.dot() on integer arrays, numpy will fall back on a very simple internal C implementation, which is single-threaded and much slower than a BLAS dot on two floating point arrays.
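One practical consequence (my suggestion, not something from the question) is that casting the integer arrays to float64, so that the product goes through BLAS, can be faster than the integer fallback, provided all the values involved stay within float64's exactly-representable integer range:

import numpy as np

N = 1024
A = np.random.randint(0, 5, (N, N))
B = np.random.randint(0, 5, (N, N))

# Integer dot: handled by numpy's internal (non-BLAS) routine
C_int = A.dot(B)

# Cast to float64 so the product goes through BLAS dgemm, then convert back.
# Safe here because every intermediate value fits exactly in a float64.
C_blas = A.astype(np.float64).dot(B.astype(np.float64)).astype(A.dtype)

assert np.array_equal(C_int, C_blas)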
Without knowing more about how you conducted those benchmarks, I would bet that a plain call to numpy.dot would also comfortably beat your other 3 methods for float32, complex64 and complex128 arrays, which are the other 3 types supported by BLAS.
* One possible way to beat standard BLAS would be to use cuBLAS, which is a BLAS implementation that will run on an NVIDIA GPU. The scikit-cuda library seems to provide Python bindings for it, although I've never used it myself.
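Based on its documentation, a rough and untested sketch of a cuBLAS matrix product through scikit-cuda might look like this (the exact calls are an assumption on my part, since I haven't used the library):

import numpy as np
import pycuda.autoinit                # initialises a CUDA context
import pycuda.gpuarray as gpuarray
import skcuda.linalg as linalg

linalg.init()                         # initialise the cuBLAS backend

a = np.random.random((1024, 1024))
b = np.random.random((1024, 1024))

# Copy to the GPU, multiply via cuBLAS, copy the result back to the host
a_gpu = gpuarray.to_gpu(a)
b_gpu = gpuarray.to_gpu(b)
c = linalg.dot(a_gpu, b_gpu).get()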
Upvotes: 6
Reputation: 2233
I understand that numpy will automatically use multiple CPU cores for some functions when it has been compiled against the right libraries, and I think dot() is one of them (though I can't find a reference now). I suspect this is what's happening. I'm not aware of any attempts to give numpy a GPU back end: http://www.reddit.com/r/Python/comments/1mw9mb/is_there_a_gpu_backend_for_numpyscipy_money_is_no/
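One way to test whether multithreading is what you're seeing (a rough sketch, assuming an OpenMP-based BLAS such as OpenBLAS) is to pin the thread count before importing numpy and compare timings:

import os
os.environ["OMP_NUM_THREADS"] = "1"   # must be set before numpy is imported

import numpy as np
from timeit import timeit

N = 2048
A = np.random.random((N, N))
B = np.random.random((N, N))

# Timed with the BLAS restricted to one thread; rerun without the env var to compare
print(timeit(lambda: A.dot(B), number=5))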
Upvotes: 0