Reputation: 21
The bottleneck of some code I have is:
for _ in range(n):
    W = np.dot(A, W)
where n can vary, A is a fixed MxM matrix, and W is Mx1.
Is there a good way to optimize this?
Upvotes: 0
Views: 136
Reputation: 3583
Since np.dot is just matrix multiplication for your shapes, you can write what you want as A^n*W, where ^ denotes repeated matrix multiplication (matrix_power) and * denotes ordinary matrix multiplication. So you can rewrite your code as

np.linalg.matrix_power(A, n) @ W
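For concreteness, here is a small comparison of the loop and the one-liner; the sizes M, n and the random A, W are made up for illustration. matrix_power builds A^n by repeated squaring, so it does roughly log2(n) matrix-matrix multiplications instead of n matrix-vector products; which one is faster depends on M and n.

import numpy as np

# Hypothetical sizes and data, just to compare the two approaches.
M, n = 50, 50
rng = np.random.default_rng(0)
A = rng.standard_normal((M, M))
A /= np.max(np.abs(np.linalg.eigvals(A)))   # keep powers of A from overflowing
W = rng.standard_normal((M, 1))

# Original loop: n matrix-vector products.
W_loop = W.copy()
for _ in range(n):
    W_loop = np.dot(A, W_loop)

# Single expression: A^n is formed by repeated squaring.
W_pow = np.linalg.matrix_power(A, n) @ W

print(np.allclose(W_loop, W_pow))   # expect True, up to floating-point error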
You can do even better with linear algebra. Assume for the moment that W is an eigenvector of A, i.e. that A*W=a*W with a just a number; then it follows that A^n*W=a^n*W.
Now you might think: OK, but what if W is not an eigenvector? Since matrix multiplication is linear, it is just as good if W can be written as a linear combination of eigenvectors, and there is even a generalisation of this idea in case W cannot be written as a linear combination of eigenvectors. If you want to read more about this, google diagonalization and Jordan normal form.
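A minimal sketch of the eigenvector route, assuming A is diagonalizable and W is an Mx1 column vector as in the question (the function name is my own):

import numpy as np

def eig_power_apply(A, W, n):
    # Assumes A is diagonalizable: A = V @ diag(lam) @ inv(V),
    # so A^n @ W = V @ diag(lam**n) @ (inv(V) @ W).
    lam, V = np.linalg.eig(A)           # eigenvalues lam, eigenvectors as columns of V
    c = np.linalg.solve(V, W)           # coefficients of W in the eigenbasis
    out = V @ (lam[:, None] ** n * c)   # scale each coefficient by lam**n, map back
    # eig generally returns complex arrays; when A and W are real the result is
    # real up to rounding, so drop the imaginary part in that case
    return out.real if np.isrealobj(A) and np.isrealobj(W) else out

Once lam and V are computed, changing n only costs an elementwise power, which is handy if you need A^n*W for many different n.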
Upvotes: 3