Cowboy

Reputation: 41

fast matrix multiplication in Matlab

I need to perform a matrix/vector multiplication in Matlab at very large sizes: "A" is a 655360 by 5 real-valued matrix that is not necessarily sparse, and "B" is a 655360 by 1 real-valued vector. My question is how to compute B'*A efficiently.

I have noticed a slight time improvement by computing A'*B instead, which gives a column vector. But it is still quite slow (I need to perform this operation several times in the program).

After a bit of searching I found an interesting Matlab toolbox, MTIMESX by James Tursa, which I hoped would improve the performance of the above matrix multiplication. After several trials, I could only get very marginal gains over Matlab's native matrix multiplication.

Any suggestions on how I should rewrite A'*B so that the operation is more efficient? Thanks.
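
For concreteness, here is a minimal sketch of the two formulations on random data of the stated size (rand and tic/toc are just stand-ins for my actual data and timing setup):

    % Minimal sketch: compare the two formulations on random data of the same size.
    A = rand(655360, 5);          % stand-in for the real matrix
    B = rand(655360, 1);          % stand-in for the real vector

    tic; C1 = B' * A; toc         % 1-by-5 row vector
    tic; C2 = A' * B; toc         % 5-by-1 column vector, equal to (B'*A)'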

Upvotes: 4

Views: 8112

Answers (5)

Iterator

Reputation: 20560

Matlab is built on fairly optimized libraries (BLAS, etc.), so you can't easily improve upon it from within Matlab. Where you can improve is by getting a better BLAS, such as one optimized for your processor: this makes better use of the caches by fetching appropriately sized blocks of data from main memory. Look into building your own compiled versions of ATLAS, ACML, MKL, or Goto BLAS.

I wouldn't try to solve this one particular multiplication unless it's really killing you. Changing up the BLAS is likely to lead to a happier solution, especially if you're not currently making use of multicore processors.
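As a starting point, something like the following should report what you are currently running in a reasonably recent Matlab release (exact output format varies by version):

    % Report the BLAS build Matlab was linked against and the thread count.
    version('-blas')        % e.g. an MKL or ACML version string
    maxNumCompThreads       % computational threads available for BLAS-level parallelism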

Upvotes: 1

Marc

Reputation: 3313

Your #1 option, if this is your bottleneck, is to re-examine your algorithm. See the question "Optimizing MATLAB code" for a great example of how choosing a different algorithm reduced runtime by three orders of magnitude.

Upvotes: 0

Maurits

Reputation: 2114

I have had good results with Matlab matrix multiplication using the GPU.
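
Something along these lines, as a rough sketch assuming the Parallel Computing Toolbox and a supported GPU (names and sizes follow the question):

    % Rough sketch, assuming the Parallel Computing Toolbox and a supported GPU.
    Ag = gpuArray(A);            % one-off transfer of the 655360-by-5 matrix
    Bg = gpuArray(B);
    Cg = Bg' * Ag;               % the multiplication runs on the GPU
    C  = gather(Cg);             % copy the 1-by-5 result back to host memory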

Upvotes: 3

Nzbuu

Reputation: 5251

In order to avoid the transpose operation, you could try:

sum(bsxfun(@times, A, B), 1)

But I would be astonished if it were faster than the direct version. See @thiton's answer.

Also look at http://www.mathworks.co.uk/company/newsletters/news_notes/june07/patterns.html to see why the column-vector-based version is faster than the row-vector-based version.
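
A rough way to compare the variants on your own data (tic/toc sketch; timings will vary by machine and Matlab release):

    % Rough timing sketch; results will vary by machine and release.
    tic; C1 = B' * A;                        toc
    tic; C2 = A' * B;                        toc
    tic; C3 = sum(bsxfun(@times, A, B), 1);  toc   % same values as B'*A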

Upvotes: 1

thiton

Reputation: 36049

Matlab's raison d'être is doing matrix computations. I would be fairly surprised if you could significantly outperform its built-in matrix multiplication with hand-crafted tools. First of all, you should make sure your multiplication can actually be performed significantly faster. You could do this by implementing a similar multiplication in C++ with Eigen.

Upvotes: 10
