Seljuk Gulcan

Reputation: 1868

MinMax Scale Sparse Matrix Excluding Zero Elements

I have a matrix whose entries lie in [0, 5]. The matrix is very sparse; most of its elements are zero. I want to apply min-max scaling to each row separately so that all elements fall in [-1, 1], but taking only the nonzero elements into account. For example, consider the following matrix:

[[0.5 3.  0.  2.  0. ]
 [0.  4.  5.  0.  0. ]
 [3.  0.  0.  2.5 4. ]]

After the transformation, it should look like this (note that the zero elements are untouched):

[[-1.          1.          0.          0.2         0.        ]
 [ 0.         -1.          1.          0.          0.        ]
 [-0.33333333  0.          0.         -1.          1.        ]]

I can do this on normal numpy arrays with the following code:

max_arr = A.max(axis=1)
min_arr = np.where(A == 0, A.max(), A).min(axis=1)
row_idx, col_idx = A.nonzero()
A_scaled = np.zeros_like(A)
for row, col in zip(row_idx, col_idx):
    element = A[row, col]
    A_scaled[row, col] = 2 * ((element - min_arr[row]) / (max_arr[row] - min_arr[row])) - 1

There are a couple of issues here. First, it is slow (because of the Python-level for loop, I suspect). Second, my matrix is sparse, so I want to use the sparse csr_matrix format, but this code does not work when A is a csr_matrix: the min_arr line raises ValueError: setting an array element with a sequence.

How can I achieve this in a fast and memory efficient way? I looked at sklearn.preprocessing.MinMaxScaler but it does not support scaling by excluding zeros.
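(For reference, the Python loop can be avoided even in the dense case with plain broadcasting. This is just a sketch of that idea, not the accepted approach; the np.inf fill keeps zero entries out of the per-row minimum, and it assumes every row has at least one nonzero element.)

```python
import numpy as np

A = np.array([[0.5, 3. , 0. , 2. , 0. ],
              [0. , 4. , 5. , 0. , 0. ],
              [3. , 0. , 0. , 2.5, 4. ]])

max_arr = A.max(axis=1, keepdims=True)
# replace zeros with +inf so they cannot win the row-wise min
min_arr = np.where(A == 0, np.inf, A).min(axis=1, keepdims=True)
# scale nonzero entries to [-1, 1]; leave zeros at 0
A_scaled = np.where(A != 0,
                    2 * (A - min_arr) / (max_arr - min_arr) - 1,
                    0.0)
```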

Upvotes: 3

Views: 1221

Answers (1)

Divakar

Reputation: 221564

Here's one vectorized method for csr_matrix matrices -

import numpy as np

def scale_sparse_matrix_rows(s, lowval=0, highval=1):
    d = s.data  # flat array of nonzero values, stored row by row (CSR order)

    # number of nonzeros per row, and the start offset of each row within d
    lens = s.getnnz(axis=1)
    idx = np.r_[0, lens[:-1].cumsum()]

    # per-row max/min computed over the nonzero values only
    maxs = np.maximum.reduceat(d, idx)
    mins = np.minimum.reduceat(d, idx)

    # broadcast the per-row stats back to the shape of d
    minsr = np.repeat(mins, lens)
    maxsr = np.repeat(maxs, lens)

    # scale to [0, 1], then to [lowval, highval], editing s.data in place
    D = highval - lowval
    scaled_01_vals = (d - minsr)/(maxsr - minsr)
    d[:] = scaled_01_vals*D + lowval

Sample run -

1) Setup input csr_matrix :

In [153]: a
Out[153]: 
array([[0.5, 3. , 0. , 2. , 0. ],
       [0. , 4. , 5. , 0. , 0. ],
       [3. , 0. , 0. , 2.5, 4. ]])

In [154]: from scipy.sparse import csr_matrix

In [155]: s = csr_matrix(a)

2) Run proposed method and verify results :

In [156]: scale_sparse_matrix_rows(s, lowval=-1, highval=1)

In [157]: s.toarray()
Out[157]: 
array([[-1.        ,  1.        ,  0.        ,  0.2       ,  0.        ],
       [ 0.        , -1.        ,  1.        ,  0.        ,  0.        ],
       [-0.33333333,  0.        ,  0.        , -1.        ,  1.        ]])
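One caveat worth noting: a row whose nonzeros are all equal (e.g. a single nonzero) gives maxs == mins, so the division produces NaN, and the method assumes every row has at least one nonzero. A possible guard (the name scale_sparse_matrix_rows_safe and the choice of mapping such entries to lowval are mine, not part of the original answer; mapping them to the midpoint would be equally defensible):

```python
import numpy as np
from scipy.sparse import csr_matrix

def scale_sparse_matrix_rows_safe(s, lowval=0, highval=1):
    d = s.data
    lens = s.getnnz(axis=1)           # assumes lens > 0 for every row
    idx = np.r_[0, lens[:-1].cumsum()]
    maxs = np.maximum.reduceat(d, idx)
    mins = np.minimum.reduceat(d, idx)
    minsr = np.repeat(mins, lens)
    maxsr = np.repeat(maxs, lens)
    span = maxsr - minsr
    span[span == 0] = 1               # avoid 0/0 when a row has one distinct nonzero value
    d[:] = (d - minsr) / span * (highval - lowval) + lowval

a = np.array([[2., 0., 0.],
              [1., 3., 0.]])
s = csr_matrix(a)
scale_sparse_matrix_rows_safe(s, lowval=-1, highval=1)
```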

Upvotes: 2
