Reputation: 3688
As I want to use only numpy and scipy (I don't want to use scikit-learn), I was wondering how to perform an L2 normalization of the rows of a huge scipy csc_matrix (2,000,000 x 500,000). The operation must consume as little memory as possible, since the whole matrix has to fit in memory.
What I have so far is:
import numpy as np
import scipy.sparse as sp

tf_idf_matrix = sp.lil_matrix((n_docs, n_terms), dtype=np.float16)
# ... perform several operations and fill up the matrix
tf_idf_matrix = tf_idf_matrix / l2_norm(tf_idf_matrix)

# l2_norm() is what I want
def l2_norm(sparse_matrix):
    pass
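To make the goal concrete, here is what row-wise L2 normalization does on a tiny dense example (just a sketch with plain numpy; the values are made up for illustration):

```python
import numpy as np

# Toy 3x2 matrix; each row is divided by its own L2 norm,
# so every non-zero row ends up with unit length.
a = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 1.0]])

norms = np.linalg.norm(a, axis=1)        # [5.0, 0.0, sqrt(2)]
nonzero = norms > 0                      # leave all-zero rows untouched
a[nonzero] /= norms[nonzero, np.newaxis]

print(np.linalg.norm(a, axis=1))         # rows 0 and 2 now have norm 1
```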
Upvotes: 2
Views: 2521
Reputation: 3688
Since I couldn't find the answer anywhere, I will post here how I approached the problem.
def l2_norm(sparse_csc_matrix):
    # first, convert the csc_matrix to a csr_matrix, which is done in linear time
    norm = sparse_csc_matrix.tocsr(copy=True)
    # compute the inverse of the l2 norm of each non-zero row
    norm.data **= 2
    norm = norm.sum(axis=1)
    n_nzeros = np.where(norm > 0)
    norm[n_nzeros] = 1.0 / np.sqrt(norm[n_nzeros])
    norm = np.array(norm).T[0]
    # modify sparse_csc_matrix in place; the arrays of a CSC matrix are the
    # CSR arrays of its transpose, so scaling the rows of the CSC matrix
    # means scaling the columns of that transposed CSR view
    sp.sparsetools.csr_scale_columns(sparse_csc_matrix.shape[1],
                                     sparse_csc_matrix.shape[0],
                                     sparse_csc_matrix.indptr,
                                     sparse_csc_matrix.indices,
                                     sparse_csc_matrix.data,
                                     norm)
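For what it's worth, the in-place row scaling done at the end can also be written as a one-line numpy update, because a CSC matrix stores the row index of each stored value in its indices array (toy matrix and factors below are made up for illustration):

```python
import numpy as np
import scipy.sparse as sp

m = sp.csc_matrix(np.array([[3.0, 4.0],
                            [0.0, 0.0],
                            [1.0, 1.0]]))
scale = np.array([0.5, 1.0, 2.0])  # one factor per row

# m.indices[k] is the row of m.data[k] in CSC storage, so this
# multiplies every stored entry by the factor of its row. Note it
# allocates a temporary of size nnz, unlike the sparsetools call.
m.data *= scale[m.indices]

print(m.toarray())  # [[1.5, 2.0], [0.0, 0.0], [2.0, 2.0]]
```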
If anyone has a better approach, please post it.
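A self-contained alternative that stays on the public scipy.sparse API (no sparsetools; the function name l2_normalize_rows is my own) could be sketched like this:

```python
import numpy as np
import scipy.sparse as sp

def l2_normalize_rows(m):
    """Return a row-normalized copy of a sparse matrix, using only
    the public scipy.sparse API (not in place, unlike l2_norm above)."""
    m = m.tocsr(copy=True)
    # Row-wise sum of squares via elementwise multiply; shape (n_rows, 1).
    sq_norm = np.asarray(m.multiply(m).sum(axis=1)).ravel()
    inv = np.zeros_like(sq_norm)
    nz = sq_norm > 0                 # leave all-zero rows untouched
    inv[nz] = 1.0 / np.sqrt(sq_norm[nz])
    # Left-multiplying by a diagonal matrix scales row i by inv[i].
    return sp.diags(inv) @ m
```

The sp.diags trick scales each row without ever densifying the matrix, at the cost of returning a new matrix rather than modifying the input in place.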
Upvotes: 2