R.joe

Reputation: 49

Why does my model perform poorly after normalization?

I'm using a fully connected neural network on data normalized so that every sample's values range from 0 to 1. The network has 100 neurons in the first layer and 10 in the second, and I trained it on almost 50 lakh (5 million) samples. I want to classify the data into two classes, but the network's performance is too low: about 49 percent accuracy on both the training and test data. I tried to increase the performance by changing the hyperparameter values, but it didn't work. Can someone please tell me what I should do to get higher performance?

import tensorflow as tf  # TensorFlow 1.x graph-mode API

# nPixels, nNodes1 (100), nNodes2 (10) and nLabels (2) are defined elsewhere.
x = tf.placeholder(tf.float32, [None, nPixels])

# First hidden layer: 100 ReLU units
W1 = tf.Variable(tf.random_normal([nPixels, nNodes1], stddev=0.01))
b1 = tf.Variable(tf.zeros([nNodes1]))
y1 = tf.nn.relu(tf.matmul(x, W1) + b1)

# Second hidden layer: 10 ReLU units
W2 = tf.Variable(tf.random_normal([nNodes1, nNodes2], stddev=0.01))
b2 = tf.Variable(tf.zeros([nNodes2]))
y2 = tf.nn.relu(tf.matmul(y1, W2) + b2)

# Output layer: softmax over the two classes
W3 = tf.Variable(tf.random_normal([nNodes2, nLabels], stddev=0.01))
b3 = tf.Variable(tf.zeros([nLabels]))
y = tf.nn.softmax(tf.matmul(y2, W3) + b3)

y_ = tf.placeholder(dtype=tf.float32, shape=[None, 2])

# Hand-rolled cross-entropy; tf.log(y) can underflow to -inf when y hits 0,
# so tf.nn.softmax_cross_entropy_with_logits on the pre-softmax logits is
# the numerically stable alternative.
cross_entropy = -tf.reduce_sum(y_ * tf.log(y), axis=1)
loss = tf.reduce_mean(cross_entropy)

optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
correct_prediction = tf.equal(tf.argmax(y_, axis=1), tf.argmax(y, axis=1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

Upvotes: 1

Views: 1269

Answers (1)

dedObed

Reputation: 1363

Your computational model knows nothing about "images"; it only sees numbers. So if you trained it with pixel values from 0-255, it has learned what "light" means, what "dark" means, and how these combine to give whatever target value you are trying to model.

And what you did by the normalization is force all pixels into 0-1. So as far as the model is concerned, they are all as black as night, and it is no surprise that it cannot extract anything meaningful.

You need to apply the same input normalization during both training and testing.
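One way to do that (a minimal sketch in NumPy; the names x_train and x_test are hypothetical, not taken from the question) is to compute the scaling constants on the training set once and apply the identical transform to the test set:

import numpy as np

def fit_minmax(train):
    # Per-feature min and max, computed on the *training* data only.
    return train.min(axis=0), train.max(axis=0)

def apply_minmax(data, lo, hi):
    # Scale to [0, 1] using the training-set statistics.
    return (data - lo) / (hi - lo + 1e-8)  # epsilon guards constant pixels

lo, hi = fit_minmax(x_train)
x_train_n = apply_minmax(x_train, lo, hi)
x_test_n = apply_minmax(x_test, lo, hi)   # same transform at test time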

And speaking of normalization for NN models, it is usually better to normalize to zero mean.
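For example, zero-mean, unit-variance standardization (again only a sketch, reusing the hypothetical x_train and x_test from above) could look like:

import numpy as np

# Statistics come from the training set only.
mean = x_train.mean(axis=0)
std = x_train.std(axis=0) + 1e-8    # epsilon avoids division by zero

x_train_n = (x_train - mean) / std  # zero mean, unit variance per pixel
x_test_n = (x_test - mean) / std    # identical transform for test data

Centered inputs keep the first layer's pre-activations spread around zero, which tends to make gradient descent behave better than inputs squashed into [0, 1].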

Upvotes: 3
