padawan

Reputation: 1315

Few calculations with huge matrices vs. lots of calculations with small matrices

I am working on a Java project which has thousands of matrix calculations. But the matrices are at most 10x10 matrices.

I wonder whether it is better to use a matrix library or to write the simple functions (determinant(), dotProduct(), etc.) myself, because for small matrices the common advice is to skip libraries and implement the operations as custom functions.
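For context, a minimal sketch of the kind of hand-rolled routines I mean (class and method names here are just placeholders, not code from my project):

```java
public class SmallMatrixOps {

    // Dot product of two vectors of equal length.
    public static double dotProduct(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    // Determinant of a 3x3 matrix by cofactor expansion; a general routine
    // for matrices up to 10x10 would use LU decomposition instead.
    public static double determinant3(double[][] m) {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    }

    public static void main(String[] args) {
        double[][] id = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        System.out.println(determinant3(id));                  // 1.0
        System.out.println(dotProduct(new double[]{1, 2, 3},
                                      new double[]{4, 5, 6})); // 32.0
    }
}
```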

I know that matrix libraries like JAMA provide high performance when it comes to 10000x10000 matrices or so.

Instead of making 5-6 calculations with 10000x10000 matrices, I make 100000 calculations with 10x10 matrices. The number of primitive operations is nearly the same.

Are both cases the same in terms of performance? Should I treat this as if I were working with huge matrices and use a library?

Upvotes: 1

Views: 133

Answers (2)

chippies

Reputation: 1615

Getting the maximum possible speed (with lots of effort)

For maximum possible speed I would suggest writing a C function that uses vector math intrinsics such as Streaming SIMD Extensions (SSE) or Advanced Vector Extensions (AVX) operations, together with multithreading (e.g. via OpenMP).

Your Java program would pass all 100k matrices to this native function, which would then handle all the calculations. Portability becomes an issue, e.g. AVX instructions are only supported on recent CPUs, and developer effort increases a lot too, especially if you are not familiar with SSE/AVX.

Reasonable speed without too much effort

You should use multiple threads by creating a class that extends java.lang.Thread or implements java.lang.Runnable. Each thread iterates through a subset of the matrices, calling your maths routine(s) for each matrix. This part is key to getting decent speed on multi-core CPUs. The maths could be your own Java function to do the calculations on a single matrix, or you could use a library's functions.
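A minimal sketch of that partitioning scheme, with each thread working on a contiguous slice of the matrix array (the maths routine here is a simple trace, standing in for your own determinant/product code; all names are illustrative):

```java
public class ParallelMatrixWork {

    // Stand-in maths routine: sum of the diagonal of one matrix.
    static double trace(double[][] m) {
        double t = 0.0;
        for (int i = 0; i < m.length; i++) t += m[i][i];
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        int n = 100_000;
        double[][][] matrices = new double[n][10][10];
        for (double[][] m : matrices)
            for (int i = 0; i < 10; i++) m[i][i] = 1.0; // identity matrices

        double[] results = new double[n];
        int threadCount = Runtime.getRuntime().availableProcessors();
        Thread[] threads = new Thread[threadCount];
        for (int t = 0; t < threadCount; t++) {
            final int from = t * n / threadCount;
            final int to = (t + 1) * n / threadCount;
            // The lambda is the Runnable; each thread handles [from, to).
            threads[t] = new Thread(() -> {
                for (int i = from; i < to; i++) {
                    results[i] = trace(matrices[i]);
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join(); // wait for all slices to finish
        System.out.println(results[0]);    // 10.0 for a 10x10 identity
    }
}
```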

I wonder whether it is better to use a matrix library or to write the simple functions (determinant(), dotProduct(), etc.) myself, because for small matrices the common advice is to skip libraries and implement the operations as custom functions.

...

Are both cases the same in terms of performance? Should I treat this as if I were working with huge matrices and use a library?

No, using a library and writing your own function for the maths are not the same performance-wise. You may be able to write a faster function that is specialised to your application, but consider this:

  • The library functions should have fewer bugs than code you will write.
  • A good library will use implementations that are efficient (i.e. the fewest operations). Do you have the time to research and implement the most efficient algorithms?

You might find the Apache Commons Math library useful. I would encourage you to benchmark Apache Commons Math and JAMA to choose the fastest.
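For the benchmarking itself, a plain-Java timing harness like the following would let you compare any two implementations, your own routine against a library call; the workload shown here is a naive 10x10 multiply, and all names are illustrative:

```java
public class MatrixBench {

    // Naive matrix multiply; i-k-j loop order is cache-friendly for row-major arrays.
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // Times `runs` multiplies; the checksum keeps the JIT from eliding the work.
    static long timeRuns(int runs, double[][] a, double[][] b) {
        long start = System.nanoTime();
        double checksum = 0.0;
        for (int r = 0; r < runs; r++) {
            checksum += multiply(a, b)[0][0];
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("checksum " + checksum);
        return elapsed;
    }

    public static void main(String[] args) {
        double[][] a = new double[10][10], b = new double[10][10];
        for (int i = 0; i < 10; i++) { a[i][i] = 2.0; b[i][i] = 3.0; }
        timeRuns(10_000, a, b); // warm-up so the JIT compiles the hot path
        long nanos = timeRuns(100_000, a, b);
        System.out.println(nanos / 1e6 + " ms for 100000 multiplies");
    }
}
```

Swapping the body of `multiply` for a library call (e.g. Commons Math or JAMA) gives a like-for-like comparison; just remember to keep the warm-up run, since HotSpot timings without one are misleading.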

Upvotes: 2

Peter Lawrey

Reputation: 533680

I suspect for a 10x10 matrix you won't see much difference.

In tests I have done on hand-coding a 4x4 matrix, the biggest overhead was loading the data into the L1 cache, and how you did the calculation didn't matter very much. For a 3x3 matrix and smaller, the implementation did appear to make a significant difference.

Upvotes: 3
