WorldSEnder

Reputation: 5044

Compute matrix power with integer entries

Let's say I have a constant matrix A and I want to compute pow(A, n). As described in this question I can calculate its eigenvalue decomposition (or more generally, its invariant subspaces and the generalized modal matrix) to speed up the process.

If A is a square matrix of size k, then with the decomposition in hand the exponentiation itself costs O(k log n) — exponentiation by squaring applied to each of the k eigenvalues — plus a one-time preparation cost of O(k^3) to compute the modal matrix.

The problem I am thinking about is loss of precision. Calculating eigenvalues and eigenvectors takes us out of the domain of integers into floating-point numbers. Even though we know that in the end pow(A, n) must have all integer entries, the algorithm outlined above only computes floating-point approximations.

Another way is to use exponentiation by squaring directly on the integer matrix, but that gives only an O(k^3 log n) algorithm.
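A minimal sketch of that direct approach in Python (function names are mine), using arbitrary-precision integers so everything stays exact:

```python
# Exact matrix power via exponentiation by squaring. Python ints are
# arbitrary precision, so there is no floating-point loss; the cost is
# O(k^3 log n) integer multiplications for a k x k matrix.

def mat_mul(X, Y):
    k = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

def mat_pow(A, n):
    k = len(A)
    R = [[int(i == j) for j in range(k)] for i in range(k)]  # identity
    while n:
        if n & 1:          # multiply in the current power of two of A
            R = mat_mul(R, A)
        A = mat_mul(A, A)  # square
        n >>= 1
    return R

A = [[1, 1], [1, 0]]
print(mat_pow(A, 15))  # [[987, 610], [610, 377]]
```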

Is there a way to accurately - without converting to floating point numbers - compute pow(A, n) fast?

Upvotes: 3

Views: 694

Answers (2)

WorldSEnder

Reputation: 5044

Using the Cayley-Hamilton theorem we can be faster. The theorem implies that every power of a k×k matrix A can be written as a linear combination of the first k powers of A (including A^0 = I).

Knowing that, we can still use exponentiation by squaring, but instead of squaring matrices we square polynomials in A with coefficients in ℤ, reducing after each step by the characteristic polynomial.

As a small example:

A = [[1, 1],
     [1, 0]]
A^2 = A + I    (as polynomial coefficients, highest degree first: {1, 1})

pow(A, 15) = {1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
           = {1, 0} * ({1, 0} * ({1, 0} * {1, 0}^2)^2)^2
           = {1, 0} * ({1, 0} * ({1, 0} * {1, 0, 0})^2)^2
           = {1, 0} * ({1, 0} * ({1, 0} * {1, 1})^2)^2
           = {1, 0} * ({1, 0} * ({1, 1, 0})^2)^2
           = {1, 0} * ({1, 0} * {2, 1}^2)^2
           = {1, 0} * ({1, 0} * {4, 4, 1})^2
           = {1, 0} * ({1, 0} * {8, 5})^2
           = {1, 0} * ({8, 5, 0})^2
           = {1, 0} * {13, 8}^2
           = {1, 0} * {169, 208, 64}
           = {1, 0} * {377, 233}
           = {377, 233, 0}
           = {610, 377}
           = [[987, 610],
              [610, 377]]

So, what is the runtime cost? Trivially O(k^2 log n), because at each squaring step we square a polynomial of degree less than k and reduce it by the characteristic polynomial, both of which cost O(k^2). A trick similar to @harold's in the other answer brings this down to O(k log k log n): multiply the polynomials with the discrete Fourier transform over a finite field in which suitable primitive roots of unity exist.
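A minimal sketch of this idea in Python (names are mine; coefficients are stored lowest degree first here, unlike the example above, and the monic characteristic polynomial is assumed known with degree k ≥ 2):

```python
# Exponentiation by squaring on polynomials modulo the characteristic
# polynomial. Coefficients are exact Python ints, lowest degree first.
# char = [c0, c1, ..., c_{k-1}, 1] is the monic characteristic polynomial.

def poly_mul_mod(p, q, char):
    k = len(char) - 1
    prod = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b
    # reduce using x^k = -(c0 + c1*x + ... + c_{k-1}*x^{k-1})
    for d in range(len(prod) - 1, k - 1, -1):
        c = prod[d]
        if c:
            prod[d] = 0
            for j in range(k):
                prod[d - k + j] -= c * char[j]
    return prod[:k]

def x_pow_mod(n, char):
    # coefficients of x^n modulo char, assuming deg(char) = k >= 2
    k = len(char) - 1
    result = [1] + [0] * (k - 1)   # the constant polynomial 1
    base = [0, 1] + [0] * (k - 2)  # the polynomial x
    while n:
        if n & 1:
            result = poly_mul_mod(result, base, char)
        base = poly_mul_mod(base, base, char)
        n >>= 1
    return result

# For A = [[1, 1], [1, 0]], the characteristic polynomial x^2 - x - 1
# is [-1, -1, 1] in this convention:
print(x_pow_mod(15, [-1, -1, 1]))  # [377, 610], i.e. A^15 = 377*I + 610*A
```

Evaluating the resulting degree-(k−1) polynomial at A is one final O(k^3)-ish step; here 377·I + 610·A gives [[987, 610], [610, 377]] as in the example.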

Upvotes: 0

user555045

Reputation: 64904

Eigenvalue decomposition is also possible for a matrix over a finite field, but only if the field is just right. So preprocessing is needed not just for the eigenvalue decomposition itself, but also to find (some) finite field(s) over which it is even possible.

Finding multiple such fields is useful to avoid having to work with a gigantic finite field: compute pow(A, n) in several small fields and use the CRT (Chinese remainder theorem) to work out what the solution would have been in ℤ. But this requires having a sufficient number of fields of sufficient size, and you wouldn't really know in advance what will be sufficient (for any fixed set of fields there is always some n above which it stops working), so maybe this all won't work in practice.
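A hedged sketch of just the CRT recombination step (the helper name and the second modulus 1013 are my choices), assuming the true entries are nonnegative and smaller than the product of the moduli:

```python
# Recover an integer entry of pow(A, n) from its residues modulo two
# coprime moduli, via Garner's formula. Only valid if the true entry
# lies in [0, m1*m2).

def crt_pair(r1, m1, r2, m2):
    # solve x ≡ r1 (mod m1), x ≡ r2 (mod m2) for coprime m1, m2
    inv = pow(m1, -1, m2)  # modular inverse (Python 3.8+)
    return (r1 + (r2 - r1) * inv % m2 * m1) % (m1 * m2)

# Entry (0, 0) of pow(A, 15) is 987; check we recover it from residues:
m1, m2 = 1009, 1013
print(crt_pair(987 % m1, m1, 987 % m2, m2))  # 987
```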

As a small example, take:

A = [[1, 1],
     [1, 0]]

The characteristic polynomial is x² - x - 1; let's guess that modulo 1009 will work (it does): it has roots 383 and 627, so:

A = QDP mod 1009, where P = Q⁻¹ mod 1009:
Q = [[  1,   1],
     [382, 626]]
D = [[383,   0],
     [  0, 627]]
P = [[ 77, 153],
     [933, 856]]

So, for example:

pow(A, 15) = Q [[928,   0],
                [  0, 436]] P
           = [[987, 610],
              [610, 377]]

Fibonacci numbers as expected, so it all worked out. But with just 1009 as the modulus, going above 15 for the exponent makes the result no longer match what it would be in ℤ; then we would need more/bigger fields.
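The example above can be checked with a few lines of Python (only exact modular integer arithmetic; Q and P taken from the answer):

```python
# Verify the mod-1009 eigendecomposition example: raise the eigenvalues
# 383 and 627 to the 15th power in GF(1009), then recombine with Q and P.
p = 1009
Q = [[1, 1], [382, 626]]
P = [[77, 153], [933, 856]]          # P = Q^{-1} mod 1009
D15 = [[pow(383, 15, p), 0],
       [0, pow(627, 15, p)]]         # [[928, 0], [0, 436]]

def mat_mul_mod(X, Y, m):
    k = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(k)) % m for j in range(k)]
            for i in range(k)]

A15 = mat_mul_mod(mat_mul_mod(Q, D15, p), P, p)
print(A15)  # [[987, 610], [610, 377]] -- the true values, as they are < 1009
```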

Upvotes: 1
