Pusheen_the_dev

Reputation: 2197

Normalization produces bad results for MLP

I tried to normalize my data with sklearn's MinMaxScaler (0.1, 1), but the result is very disappointing. Without MinMax normalization I was at 78% accuracy on my problem, and with MinMax normalization it falls to 71%. Do you know what the problem could be?

My data shape is: [n_samples][1D_vector_of_values]

Here is how I use sklearn to normalize:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0.1, 10))
X = np.array(X)
X_test = np.array(X_test)
X = scaler.fit_transform(X)
X_test = scaler.fit_transform(X_test)

Thanks for help!

Upvotes: 0

Views: 746

Answers (1)

Feras

Reputation: 843

It is not always the case that normalization gives better results than no normalization, since you lose some information by applying this kind of transformation. It all depends on the nature of the data.

I'd try standardization instead of range normalization, but be careful to fit the scaler on the training data and reuse that same fitted scaler for the test and validation sets. You didn't give us much information about your data, but I'd also suggest applying feature selection after the normalization/standardization step.
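As a minimal sketch of that advice, assuming scikit-learn's StandardScaler and SelectKBest (the array shapes, the k value and the labels y below are placeholders, not taken from the question):

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for the question's X / X_test ([n_samples, n_features])
rng = np.random.RandomState(0)
X = rng.rand(100, 20)
y = rng.randint(0, 2, size=100)   # placeholder training labels
X_test = rng.rand(30, 20)

# Fit the scaler on the training set only, then reuse it for the test set
scaler = StandardScaler()
X = scaler.fit_transform(X)
X_test = scaler.transform(X_test)

# Optional: feature selection after standardization (k is a value to tune)
selector = SelectKBest(f_classif, k=5)
X = selector.fit_transform(X, y)
X_test = selector.transform(X_test)

The key point is that the test set is transformed with the statistics learned from the training set, never re-fitted on its own.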

Upvotes: 1
