Sephka

Reputation: 13

Faster Matrix-Vector-Multiplication (MVM) if the matrix elements are computed on-the-fly

I am currently working on a project where I have to calculate the extremal eigenvalues using the Lanczos algorithm. I replaced the MVM so that the matrix elements are calculated on-the-fly, because I will have to calculate the eigenvalues of really huge matrices. This slows down my code because for-loops are much slower in Python than a vectorized MVM. Is there any way to improve my code in an easy way? I tried using Cython but had no real luck with it.

for i in range(0, dim):
    for j in range(0, dim):
        temp = get_Matrix_Element(i, j)
        ws[i] = ws[i] + temp * v[j]

This replaces:

ws = M.dot(v)

Update: The matrix M is sparse and can, for "small" systems, be stored in a sparse matrix format using scipy.sparse. For large systems with up to ~10^9 dimensions I need to calculate the matrix elements on-the-fly.
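To make the setup concrete, here is a minimal, self-contained sketch of the element-wise on-the-fly MVM described above, checked against a dense `M.dot(v)`. The matrix rule used here (a symmetric tridiagonal matrix) is purely a hypothetical stand-in for the real `get_Matrix_Element`:

```python
import numpy as np

def get_Matrix_Element(i, j):
    # Hypothetical matrix rule (stand-in for the real one in the question):
    # a symmetric tridiagonal matrix computed from a formula, never stored.
    if i == j:
        return 2.0
    if abs(i - j) == 1:
        return -1.0
    return 0.0

def mvm_on_the_fly(v):
    """Matrix-vector product without ever storing the matrix."""
    dim = len(v)
    ws = np.zeros(dim)
    for i in range(dim):
        for j in range(dim):
            ws[i] += get_Matrix_Element(i, j) * v[j]
    return ws

# Sanity check against the dense product for a small dimension
dim = 5
M = np.array([[get_Matrix_Element(i, j) for j in range(dim)]
              for i in range(dim)])
v = np.ones(dim)
print(mvm_on_the_fly(v))  # same result as M.dot(v)
```

The double Python loop is exactly what makes this slow: every element costs an interpreted function call.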

Upvotes: 1

Views: 261

Answers (1)

Anton Menshov

Reputation: 2326

The easiest and fastest-to-implement solution would be to go halfway: precompute one row at a time.

In your original solution (M.dot(v)) you had to store dim x dim elements, which grows quadratically. If you precompute only one row at a time, the storage scales linearly and should not cause you trouble (since you are already storing a resultant vector ws of the same size).

The code should be along the lines of:

for i in range(0, dim):
    temp = get_Matrix_Row(i)
    ws[i] = temp.dot(v)

where temp is now a dim x 1 vector.

Such modification should allow more optimizations to happen during the dot product without major code modifications.
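A runnable sketch of this row-wise approach, using the same hypothetical tridiagonal matrix rule as a stand-in (the real `get_Matrix_Row` would encode your own formula; the extra `dim` parameter here is an assumption of this sketch):

```python
import numpy as np

def get_Matrix_Row(i, dim):
    # Hypothetical vectorized row generator: builds row i as one NumPy array,
    # so the per-row work runs in compiled NumPy code rather than a Python loop.
    row = np.zeros(dim)
    row[i] = 2.0
    if i > 0:
        row[i - 1] = -1.0
    if i < dim - 1:
        row[i + 1] = -1.0
    return row

def mvm_row_wise(v):
    """Matrix-vector product storing only one row at a time (O(dim) memory)."""
    dim = len(v)
    ws = np.empty(dim)
    for i in range(dim):
        ws[i] = get_Matrix_Row(i, dim).dot(v)  # inner loop is a NumPy dot
    return ws

v = np.ones(5)
print(mvm_row_wise(v))  # same result as the full M.dot(v)
```

The outer loop is still Python, but the O(dim) inner loop per row is replaced by a single compiled `dot`, which is where most of the speedup comes from.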

Upvotes: 1
