Healer77Om

Reputation: 325

Why does normalizing MNIST images reduce accuracy?

I am using a basic NN to train and test accuracy on the MNIST dataset.

System: i5 8th Gen, GPU: Nvidia 1050 Ti

Here is my code:

from __future__ import print_function, absolute_import, unicode_literals, division
import tensorflow as tf

mnist = tf.keras.datasets.mnist

(x_train,y_train) , (x_test,y_test) = mnist.load_data()
#x_train , y_train = x_train/255.0 , y_train/255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(312, activation='relu'),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(
    optimizer='Adamax',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
model.fit(x_train,y_train,epochs=5)
model.evaluate(x_test,y_test)

When I normalize the images as in the commented-out line above, the accuracy drops horribly:

loss: 10392.0626 - accuracy: 0.0980

However, when I don't normalize them, it gives:

loss: 0.2409 - accuracy: 0.9420

In general, normalizing the data helps gradient descent converge faster. Why is there such a huge difference here? What am I missing?

Upvotes: 0

Views: 3735

Answers (2)

Uri Cohen

Reputation: 3608

You need to apply the same normalization to the training set and the test set. If you use a pre-trained network, you should apply the same normalization that was used during training; here you are missing the normalization of the test set.

x_train = x_train/255.0
x_test = x_test/255.0 
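
For completeness, here is a minimal sketch of the full corrected script (same model, layer sizes and optimizer as in the question; only the normalization changes):

import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Scale only the images to [0, 1]; the labels stay as integer class indices 0-9.
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(312, activation='relu'),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(
    optimizer='Adamax',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)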

You're welcome.

Upvotes: 0

amalik2205

Reputation: 4162

Use this:

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train/255.0, x_test/255.0

You are dividing your labels (y_train) by 255, so you are not normalizing properly: it is the images (x_train, x_test) that should be scaled to [0, 1], while the labels must stay as the integer class indices 0-9 that sparse_categorical_crossentropy expects.
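
To see why this breaks training, here is a small illustration (the label values are just example digits of the kind mnist.load_data() returns):

import numpy as np

# MNIST labels are the digits 0-9 stored as integers.
y = np.array([5, 0, 4, 1, 9])

print(y / 255.0)
# [0.01960784 0.         0.01568627 0.00392157 0.03529412]
# These fractional values are no longer valid class indices, so
# sparse_categorical_crossentropy cannot match them to the 10 softmax outputs.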

Upvotes: 3
