Reputation: 71
I tested Eigen's SparseLU and BiCGSTAB solvers on some sparse matrices whose dense counterparts range in size from 3000×3000 to 16000×16000. In every case, SparseLU was around 13% faster than BiCGSTAB.
I didn't feed BiCGSTAB a RowMajor sparse matrix, and I didn't give it any preconditioner. That might be why it was slower.
So I am wondering: if I set up both methods properly, which one should be faster?
And what if the matrix size goes up to millions×millions?
Thanks a lot!
Upvotes: 1
Views: 1535
Reputation: 349
The choice of linear solver has a lot to do with the distribution of eigenvalues/eigenvectors of the matrix. If you have a symmetric positive definite matrix, then conjugate gradient (CG) is a good option. The number of iterations depends on the condition number (max eigenvalue / min eigenvalue). For a matrix derived from an elliptic operator, the condition number increases with the size of the matrix.
Check out this article by Jonathan Shewchuk for a great explanation of CG: https://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf
For other matrix types, you can use GMRES etc., chosen based on the spectral properties of the matrix. Check out this paper: http://www.sam.math.ethz.ch/~mhg/pub/biksm.pdf
Hope this helps.
Upvotes: 0
Reputation: 3612
You already mentioned the main reasons for the performance difference. Iterative methods become much faster when you choose the "right" preconditioner.
Some example preconditioners you might look at are Eigen's own DiagonalPreconditioner (BiCGSTAB's default Jacobi preconditioner), IncompleteLUT, and IdentityPreconditioner (i.e. no preconditioning, for comparison).
Each preconditioner also has parameters that should be tuned.
Upvotes: 0