Reputation: 169
I'm currently trying to compute the power of a matrix, and for that I use the well-known exponentiation-by-squaring algorithm.
def mat_mul(a, b):
    # Plain O(n^3) multiplication of two square n x n matrices.
    n = len(a)
    c = []
    for i in range(n):
        c.append([0] * n)
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
    return c

def mat_pow(a, n):
    # Exponentiation by squaring: O(log n) matrix multiplications.
    if n <= 0:
        return None
    if n == 1:
        return a
    if n == 2:
        return mat_mul(a, a)
    t1 = mat_pow(a, n // 2)
    if n % 2 == 0:
        return mat_mul(t1, t1)
    return mat_mul(t1, mat_mul(a, t1))
The problem is that my algorithm is still too slow. After some research, I found out that this is because, contrary to what I thought, the time of a matrix multiplication depends not only on the matrix size but also on the magnitude of the numbers inside it.
In fact, the numbers in my matrix become very large, so after some time the multiplications get much slower. Typically, I have M, a 13×13 matrix filled with random 0s and 1s, and I want to compute M^(10^8). The integers in the matrix can have hundreds of digits. I'd like to know if there is a way to avoid that problem.
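For example, a quick check with the functions above shows how fast the entries grow (the exponent 100 here is just an arbitrary example):

import random

# A random 13x13 matrix of 0s and 1s, as described above.
M = [[random.randint(0, 1) for _ in range(13)] for _ in range(13)]

# Even this modest power makes every further multiplication work
# on ever-larger integers.
P = mat_pow(M, 100)
print(max(len(str(x)) for row in P for x in row))  # digits of the largest entry

The printed count is already dozens of digits for a typical random matrix, and it keeps growing with the exponent.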
I've seen that I could use matrix diagonalization, but the problem is that I can't use external libraries (like numpy), and implementing the diagonalization myself seems a bit too complicated.
Upvotes: 1
Views: 334
Reputation: 26066
the time of matrix multiplication depends on the matrix size and on the numbers in the matrix.
Well, of course: you are multiplying integers of arbitrary size. CPUs have no native support for multiplying those, so it will be very slow, and it gets slower as the integers grow.
The integers in the matrix can have hundreds of digits. I'd like to know if there is a way to avoid that problem.
There are several ways:
Avoid exact integers: use floating-point numbers and handle the rounding error however your project needs (see the sketch at the end of this answer). This greatly increases the speed and, most importantly, the cost no longer depends on the size of the numbers. Memory usage also drops considerably.
Use a better algorithm. You already suggested one (diagonalization); this is one of the best ways to gain performance, provided the better algorithm actually has better bounds.
Rewrite it in a low-level systems language. This can buy you roughly an order of magnitude. Python is a poor choice for high-performance computing unless you use specialized libraries, like numpy, that do the heavy lifting for you.
Ideally, you should be doing all 3 if you actually need the performance.
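For the first point, here is a minimal sketch in the same style as your code; the names mat_mul_f and mat_pow_f are just for illustration. Keep in mind that for an exponent as large as 10^8 the entries will exceed the range of a Python float, so in practice you would combine this with some rescaling or a logarithmic representation, depending on what you need from the result.

def mat_mul_f(a, b):
    # Same triple loop as in the question, but on floats, so each
    # scalar multiplication takes constant time regardless of magnitude.
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    return c

def mat_pow_f(a, n):
    # Iterative exponentiation by squaring on float matrices.
    size = len(a)
    result = [[1.0 if i == j else 0.0 for j in range(size)] for i in range(size)]
    base = [[float(x) for x in row] for row in a]
    while n > 0:
        if n % 2 == 1:
            result = mat_mul_f(result, base)
        base = mat_mul_f(base, base)
        n //= 2
    return result

Each call to mat_mul_f now costs the same no matter how many squarings have already been done, which is exactly what you lose with arbitrary-precision integers.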
Upvotes: 3