nono

Reputation: 185

Left Multiplication of vector and csr_matrix

I have a sparse matrix A in CSR format, of shape 50*10000, and an np.array v of size 50, for which I would like to calculate the product v.dot(A). How do I do this efficiently?

Of course, running v.dot(A) directly is not a good idea, because numpy's dot does not understand scipy.sparse matrices. Unfortunately, to my knowledge, scipy.sparse does not have a function for left multiplication of a matrix by a vector.

I've tried the following methods, but all seem to be pretty time-costly:

  1. Transpose A and use the standard .dot

I transpose A and then use the .dot method. This multiplies A.T with v as a column vector.

```
>>> import numpy as np
>>> from scipy import sparse
>>> A = sparse.csr_matrix([[1, 2, 0, 0],
...                        [0, 0, 3, 4]])
>>> v = np.array([1, 1])
>>> A.T.dot(v)
array([1, 2, 3, 4], dtype=int32)
```
  2. Transpose v and use the multiply and sum methods

I use the csr_matrix.multiply() method, which performs point-wise multiplication, and then I sum over the rows.

```
>>> vt = v[np.newaxis].T
>>> A.multiply(vt).sum(axis=0)
matrix([[1, 2, 3, 4]], dtype=int32)
```
  3. Turn v into a sparse matrix and use the .dot method

I tried different construction methods; all seemed costly. This is the most readable example (probably not the most efficient one):

```
>>> sparse_v = sparse.csr_matrix(v)
>>> sparse_v.dot(A).todense()
matrix([[1, 2, 3, 4]], dtype=int32)
```

Method 1 is by far the fastest, but the .T step is still very time-consuming. Is there a better way to perform left multiplication on sparse matrices?
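For reference, a minimal sketch of how the three approaches can be timed at roughly this size (the density value here is arbitrary, chosen only for illustration):

```
import timeit
import numpy as np
from scipy import sparse

# Same shapes as above: A is 50 x 10000, v has 50 entries.
A = sparse.random(50, 10000, density=0.01, format='csr')
v = np.random.rand(50)

candidates = {
    'A.T.dot(v)':         lambda: A.T.dot(v),
    'multiply + sum':     lambda: A.multiply(v[:, np.newaxis]).sum(axis=0),
    'sparse v, then dot': lambda: sparse.csr_matrix(v).dot(A),
}

for name, fn in candidates.items():
    per_call = timeit.timeit(fn, number=1000) / 1000
    print(f'{name}: {per_call * 1e6:.1f} µs per call')
```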

Upvotes: 2

Views: 1311

Answers (2)

hpaulj

Reputation: 231335

In [746]: A = sparse.csr_matrix([[1, 2, 0, 0],
     ...:                        [0, 0, 3, 4]])
In [747]: A                                                                                            
Out[747]: 
<2x4 sparse matrix of type '<class 'numpy.longlong'>'
    with 4 stored elements in Compressed Sparse Row format>
In [748]: print(A)                                                                                     
  (0, 0)    1
  (0, 1)    2
  (1, 2)    3
  (1, 3)    4
In [749]: v = np.array([1, 1])                                                                         

A.T returns a new matrix, but in csc format. Otherwise the change is minimal, but it isn't a view the way a dense transpose is.

In [750]: A.T                                                                                          
Out[750]: 
<4x2 sparse matrix of type '<class 'numpy.longlong'>'
    with 4 stored elements in Compressed Sparse Column format>
In [755]: timeit A.T                                                                                   
82.9 µs ± 542 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Actually the time is pretty good compared to the time it takes to make the original sparse matrix:

In [759]: timeit A = sparse.csr_matrix([[1, 2, 0, 0],   [0, 0, 3, 4]])                                 
349 µs ± 356 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Sparse is optimized for matrix multiplication of large very sparse matrices. Most other operations are faster with dense arrays (provided they fit in memory).
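A quick way to see where that trade-off lands for a particular shape is to compare against a dense copy at a few densities; a rough sketch (the densities chosen here are arbitrary):

```
import timeit
import numpy as np
from scipy import sparse

v = np.random.rand(50)
for density in (0.001, 0.01, 0.1):
    A = sparse.random(50, 10000, density=density, format='csr')
    Ad = A.toarray()                      # dense copy of the same data
    t_sparse = timeit.timeit(lambda: A.T.dot(v), number=500) / 500
    t_dense = timeit.timeit(lambda: v.dot(Ad), number=500) / 500
    print(f'density {density}: sparse {t_sparse * 1e6:.1f} µs, dense {t_dense * 1e6:.1f} µs')
```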

The default multiplication for sparse matrices is matrix multiplication:

In [752]: A.T*v                                                                                        
Out[752]: array([1, 2, 3, 4], dtype=int64)
In [753]: A.T.dot(v)                                                                                   
Out[753]: array([1, 2, 3, 4], dtype=int64)
In [754]: A.T@(v)                                                                                      
Out[754]: array([1, 2, 3, 4], dtype=int64)

@ (__matmul__) delegates to __mul__, and sparse dot does as well.

* and dot time the same; @ is a bit slower.

In [758]: timeit A.T*(v)
95.6 µs ± 424 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Removing the transpose from the timing:

In [760]: %%timeit A1=A.T 
     ...: A1*v                                                                                              
7.77 µs ± 22.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

Your 2nd approach uses sparse elementwise multiplication:

In [774]: A.multiply(v[:,None]).sum(axis=0)                                                            
Out[774]: matrix([[1, 2, 3, 4]], dtype=int64)

That's not as efficient as matrix multiplication:

In [775]: A.multiply(v[:,None])                                                                        
Out[775]: 
<2x4 sparse matrix of type '<class 'numpy.longlong'>'
    with 4 stored elements in COOrdinate format>
In [776]: timeit A.multiply(v[:,None])                                                                 
147 µs ± 1.14 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

The result is another sparse matrix. sum(axis=0) is actually implemented as a matrix multiplication. The extractor matrix for this sum is sparse.csr_matrix([1,1]). But that's just sparse_v in your last example.
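A small sketch of that equivalence, with an explicit row-of-ones extractor (the float dtype in the first output comes from np.ones):

```
>>> extractor = sparse.csr_matrix(np.ones((1, A.shape[0])))   # 1x2 row of ones
>>> extractor.dot(A).toarray()
array([[1., 2., 3., 4.]])
>>> A.sum(axis=0)
matrix([[1, 2, 3, 4]], dtype=int64)
```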

In [787]: A.sum(axis=0)                                                                                
Out[787]: matrix([[1, 2, 3, 4]], dtype=int64)
In [788]: timeit A.sum(axis=0)                                                                         
242 µs ± 191 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)

All of these timings should be viewed with caution. A is small, and not very sparse. Compare this row sum for a much larger sparse matrix:

In [799]: Ab = sparse.random(1000,1000,.001,'csr')                                                     
In [800]: timeit Ab.sum(axis=0)                                                                        
269 µs ± 12.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [801]: timeit Ab.T*np.ones(1000)                                                                    
118 µs ± 216 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Upvotes: 2

Tobias123

Reputation: 36

The @ operator also works for scipy sparse matrices, and a short, rough performance comparison with your methods indicates that it performs similarly to your method 1.
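For example, a minimal sketch, assuming a SciPy/NumPy combination recent enough that the reflected @ is supported for sparse matrices:

```
>>> import numpy as np
>>> from scipy import sparse
>>> A = sparse.csr_matrix([[1, 2, 0, 0],
...                        [0, 0, 3, 4]])
>>> v = np.array([1, 1])
>>> v @ A          # left multiplication without an explicit transpose
array([1, 2, 3, 4])
```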

Upvotes: 0
