Alex I

Reputation: 20287

Solve linear equations starting from approximate solution

In a problem I'm working on, there is a need to solve Ax = b, where A is an n × n square matrix (typically n is a few thousand) and b and x are vectors of size n. The trick is that this has to be done many (billions of) times, and A and b change only very slightly between successive calculations.

Is there a way to reuse an existing approximate solution for x (or perhaps the inverse of A) from the previous calculation instead of solving the equations from scratch?

I'd also be interested in a way to get x to within some defined accuracy (e.g. the error in any element of x < maxerr, or norm(error in x) < maxerr) rather than an exact solution (again, reusing the previous calculations).
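
For concreteness, the baseline I'd like to improve on looks roughly like this (NumPy, with made-up sizes and data; the real A and b come from elsewhere):

```python
import numpy as np

n = 1000                                      # real problem: a few thousand
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

for step in range(5):                         # in reality: billions of iterations
    A += 1e-6 * rng.standard_normal((n, n))   # A and b change only very slightly
    b += 1e-6 * rng.standard_normal(n)
    x = np.linalg.solve(A, b)                 # O(n^3) from scratch every time
```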

Upvotes: 4

Views: 212

Answers (3)

SargeATM

Reputation: 2841

In general, a solution might not exist for Ὰx = b.

Given that Ax = b and A + ε = Ὰ, the rank of Ὰ doesn't necessarily equal the rank of A.

If you use the Frobenius norm of ε to denote the distance between A and Ὰ, I'm not sure that a bound exists on the rank distance between A and Ὰ in terms of the norm of ε.

For instance, if a bound did exist and you could say:

rank(A − Ὰ) < f(‖ε‖) < 1

then you would know that a solution exists as long as you choose ε so that f(‖ε‖) < 1.
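
As a concrete (made-up) illustration of the first point: an arbitrarily small perturbation can make Ὰ singular, so Ὰx = b can fail to have a solution even though ‖ε‖ is tiny.

```python
import numpy as np

delta = 1e-12
A   = np.array([[delta, 0.0],
                [0.0,   1.0]])    # invertible
eps = np.array([[-delta, 0.0],
                [0.0,    0.0]])   # Frobenius norm of eps = delta, arbitrarily small
A2  = A + eps                     # Ὰ = A + ε is exactly singular
b   = np.array([1.0, 0.0])

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A2))  # 2 1
# np.linalg.solve(A2, b) raises LinAlgError: Ὰx = b has no solution for this b
```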

Upvotes: 1

David Eisenstat

Reputation: 65458

If the individual problems can be solved by the conjugate gradient method (i.e., A is symmetric, positive-definite, and real), then I would suggest

  • Before the first time you solve for x, and every Θ(n) solves thereafter, compute the Cholesky decomposition of A.
  • Use the Cholesky decomposition for the preconditioned conjugate gradient method.

The idea is: with a dense matrix, the asymptotic cost of Θ(1) steps of preconditioned conjugate gradient is Θ(n²), and the asymptotic cost of Cholesky is Θ(n³), so a batch of Θ(n) solves for x costs Θ(n³) overall. If we used the exact A that we are trying to solve for as the preconditioner, then preconditioned conjugate gradient would converge in one step! The hope is that, however A is changing, it does so slowly enough that the preconditioning stays effective enough for a constant number of iterations to suffice.

To reach a defined accuracy (by some metric), you can look at the residual from preconditioned conjugate gradient. (Then I’d recompute the Cholesky decomposition every Θ(1) iterations of preconditioned conjugate gradient.)
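
A rough sketch of what I mean, using SciPy (the synthetic A and b, the drift, and the tolerance are placeholders I made up; treat it as a starting point rather than tuned code):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import cg, LinearOperator

def cholesky_preconditioner(A):
    # Factor the current A once (Θ(n^3)); applying it is a Θ(n^2) triangular solve.
    c, low = cho_factor(A)
    return LinearOperator(A.shape, matvec=lambda r: cho_solve((c, low), r), dtype=A.dtype)

n = 500
rng = np.random.default_rng(0)
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)                   # symmetric positive definite
b = rng.standard_normal(n)

M = cholesky_preconditioner(A)                # refresh every Θ(n) solves
x = np.zeros(n)
for step in range(10):                        # stand-in for the long stream of solves
    A += 1e-6 * np.eye(n)                     # A and b drift slightly
    b += 1e-6 * rng.standard_normal(n)
    # Warm start from the previous x; rtol is the accuracy target
    # (older SciPy versions spell this parameter tol).
    x, info = cg(A, b, x0=x, M=M, rtol=1e-10)
    if info > 0:                              # slow convergence: the preconditioner is stale
        M = cholesky_preconditioner(A)
        x, info = cg(A, b, x0=x, M=M, rtol=1e-10)
```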

Upvotes: 3

Axel Kemper

Reputation: 11322

You could use the Sherman–Morrison formula to incrementally update the inverse of A whenever the change to A can be expressed as a rank-one update A + uvᵀ (the Woodbury identity covers higher-rank updates).
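
A small NumPy sketch of a rank-one update (function name and test data are mine):

```python
import numpy as np

def sherman_morrison(A_inv, u, v):
    """Return (A + u vᵀ)⁻¹ given A⁻¹, in O(n²); valid when 1 + vᵀ A⁻¹ u ≠ 0."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)
u, v = rng.standard_normal(n), rng.standard_normal(n)

A_inv = np.linalg.inv(A)
updated = sherman_morrison(A_inv, u, v)
print(np.allclose(updated, np.linalg.inv(A + np.outer(u, v))))  # True
```

Each such update costs O(n²) instead of the O(n³) of a fresh inversion, and applying the stored inverse to b is another O(n²) matrix–vector product.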

To speed up the matrix multiplications, you could use a suitable matrix multiplication algorithm or a library tuned for high-performance computing. Classic matrix multiplication has complexity O(n³); Strassen-type algorithms achieve O(n^2.8) or better.

A similar question, without a real answer, was asked here.

Upvotes: 1
