Jingles

Reputation: 1135

Normalize a NumPy signal (3-dimensional array) along axis=2, to between -1 and 1

I have a NumPy array [shape: (100, 11, 1000)] that I would like to normalize along axis=2, to values between -1 and 1. Which method should I use so that every signal, for every batch and channel (axes 0 and 1), ends up with values ranging between -1 and 1?

Figure 1: The data values can range between -20 and 20, for example plt.plot(data[0,0]):

[plot: signal1]

Figure 2: Other signals range between -140 and 140, for example plt.plot(data[55,3]):

[plot: signal2]

Update: MinMaxScaler does not support 3-dimensional arrays: "Found array with dim 3. MinMaxScaler expected <= 2."

Update: Applying the following code doesn't produce a signal similar to the input:

X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
norm = X_std * (max - min) + min  # min, max = -1, 1 (the desired feature range)

Figure 3: the output after applying the min-max normalization code to the input from Figure 1:

[plot: signal3]

Upvotes: 1

Views: 954

Answers (2)

Tom Szwagier

Reputation: 11

Just specify axis=2 in the following formula (with keepdims=True so that the per-signal min and max broadcast against the original array):

X_std = (X - X.min(axis=2, keepdims=True)) / (X.max(axis=2, keepdims=True) - X.min(axis=2, keepdims=True))
X_scaled = X_std * (max - min) + min  # here min, max = -1, 1 (the target range)

It is absolutely normal that with axis=0 the signal is distorted, because each time point is then normalized relative to the values of the other samples!
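For completeness, a minimal runnable sketch of that per-signal normalization in pure NumPy (the (100, 11, 1000) shape comes from the question; the random dummy data and the minmax_normalize helper name are illustrative assumptions):

import numpy as np

def minmax_normalize(x, axis=-1, feature_range=(-1, 1)):
    # Scale x to feature_range independently along the given axis.
    lo, hi = feature_range
    x_min = x.min(axis=axis, keepdims=True)
    x_max = x.max(axis=axis, keepdims=True)
    x_std = (x - x_min) / (x_max - x_min)  # each signal mapped to [0, 1]
    return x_std * (hi - lo) + lo          # then rescaled to [lo, hi]

data = np.random.randn(100, 11, 1000) * 20  # dummy data with the question's shape
norm = minmax_normalize(data, axis=2)
print(norm.min(axis=2).min(), norm.max(axis=2).max())  # -1.0 1.0 for every signal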

Upvotes: 1

GhandiFloss

Reputation: 384

You could use the MinMaxScaler from sklearn:

from sklearn.preprocessing import MinMaxScaler

# create object with feature range of -1, 1
scaler = MinMaxScaler(feature_range=(-1, 1))

# fit the scaler to the data and transform it
transformed_array = scaler.fit_transform(your_data)

If you don't want to use sklearn, you could implement it yourself in NumPy. The scaler computes the transformation using the following equations:

X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (max - min) + min  # min, max are the feature_range bounds

Docs: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html
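Note that MinMaxScaler only accepts 2-D input and scales each column independently, which is why it rejects the (100, 11, 1000) array from the question. A minimal sketch of one workaround (assuming each signal along the last axis should be scaled on its own; the dummy data is illustrative) is to put every signal into its own column, scale, and reshape back:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.random.randn(100, 11, 1000) * 20  # dummy data with the question's shape
scaler = MinMaxScaler(feature_range=(-1, 1))

# Flatten the batch and channel axes, then transpose so each column is one
# signal of length 1000: (100, 11, 1000) -> (1100, 1000) -> (1000, 1100).
flat = data.reshape(-1, data.shape[-1]).T

scaled = scaler.fit_transform(flat)  # scales every column to (-1, 1)

# Undo the transpose and reshape to recover the original (100, 11, 1000) layout.
normalized = scaled.T.reshape(data.shape)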

Upvotes: 1
