PlsWork

Reputation: 2168

How to scale each column of a matrix

This is how I scale a single vector:

vector = np.array([-4, -3, -2, -1, 0])

# pass the vector, current range of values, the desired range, and it returns the scaled vector
scaledVector = np.interp(vector, (vector.min(), vector.max()), (-1, +1)) # results in [-1.  -0.5  0.   0.5  1. ]

How can I apply the above approach to each column of a given matrix?

matrix = np.array(
      [[-4, -4, 0, 0, 0],
      [-3, -3, 1, -15, 0],
      [-2, -2, 8, -1, 0],
      [-1, -1, 11, 12, 0],
      [0, 0, 50, 69, 80]])

scaledMatrix = [insert code that scales each column of the matrix]

Note that the first two columns of the scaledMatrix should be equal to the scaledVector from the first example. For the matrix above, the correctly computed scaledMatrix is:

[[-1.         -1.         -1.         -0.64285714 -1.        ]
 [-0.5        -0.5        -0.96       -1.         -1.        ]
 [ 0.          0.         -0.68       -0.66666667 -1.        ]
 [ 0.5         0.5        -0.56       -0.35714286 -1.        ]
 [ 1.          1.          1.          1.          1.        ]]

My current approach (wrong):

np.interp(matrix, (np.min(matrix), np.max(matrix)), (-1, +1))

Upvotes: 4

Views: 3442

Answers (1)

P. Camilleri

Reputation: 13228

If you want to do it by hand and understand what's going on:

First, subtract the column-wise minima so that each column has a minimum of 0.

Then divide by the column-wise amplitude (max - min) so that each column has a maximum of 1.

Now each column lies between 0 and 1. If you want it between -1 and 1, multiply by 2 and subtract 1:

In [3]: mins = np.min(matrix, axis=0)

In [4]: maxs = np.max(matrix, axis=0)

In [5]: (matrix - mins[None, :]) / (maxs[None, :] - mins[None, :])
Out[5]: 
array([[ 0.        ,  0.        ,  0.        ,  0.17857143,  0.        ],
       [ 0.25      ,  0.25      ,  0.02      ,  0.        ,  0.        ],
       [ 0.5       ,  0.5       ,  0.16      ,  0.16666667,  0.        ],
       [ 0.75      ,  0.75      ,  0.22      ,  0.32142857,  0.        ],
       [ 1.        ,  1.        ,  1.        ,  1.        ,  1.        ]])

In [6]: 2 * _ - 1
Out[6]: 
array([[-1.        , -1.        , -1.        , -0.64285714, -1.        ],
       [-0.5       , -0.5       , -0.96      , -1.        , -1.        ],
       [ 0.        ,  0.        , -0.68      , -0.66666667, -1.        ],
       [ 0.5       ,  0.5       , -0.56      , -0.35714286, -1.        ],
       [ 1.        ,  1.        ,  1.        ,  1.        ,  1.        ]])

I use [None, :] so that NumPy treats mins and maxs as "row vectors" (shape (1, 5)) rather than column vectors.
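As a side note (my addition, not part of the original answer): because NumPy broadcasting aligns trailing axes, a 1-D array of shape (5,) already lines up against the matrix's columns, so the explicit `[None, :]` is optional here, and the whole operation can be written as a single expression:

```python
import numpy as np

matrix = np.array(
    [[-4, -4, 0, 0, 0],
     [-3, -3, 1, -15, 0],
     [-2, -2, 8, -1, 0],
     [-1, -1, 11, 12, 0],
     [0, 0, 50, 69, 80]])

mins = matrix.min(axis=0)  # per-column minima, shape (5,)
maxs = matrix.max(axis=0)  # per-column maxima, shape (5,)

# Shift to [0, 1] per column, then rescale to [-1, 1] in one go;
# the (5,) arrays broadcast against each row of the (5, 5) matrix.
scaled = 2 * (matrix - mins) / (maxs - mins) - 1
print(scaled)
```

Note this would divide by zero for a constant column (max == min), just like the step-by-step version above.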

Otherwise, use the wonderful sklearn package, whose preprocessing module has lots of useful transformers:

In [13]: from sklearn.preprocessing import MinMaxScaler

In [14]: scaler = MinMaxScaler(feature_range=(-1, 1))

In [15]: scaler.fit(matrix)
Out[15]: MinMaxScaler(copy=True, feature_range=(-1, 1))

In [16]: scaler.transform(matrix)
Out[16]: 
array([[-1.        , -1.        , -1.        , -0.64285714, -1.        ],
       [-0.5       , -0.5       , -0.96      , -1.        , -1.        ],
       [ 0.        ,  0.        , -0.68      , -0.66666667, -1.        ],
       [ 0.5       ,  0.5       , -0.56      , -0.35714286, -1.        ],
       [ 1.        ,  1.        ,  1.        ,  1.        ,  1.        ]])

Upvotes: 2
