user9765822

Reputation: 21

Trained TensorFlow model gives different results for the same input values depending on batch size

I'm trying to run a trained TensorFlow model, but the trained model gives me different results for the same input.

I tried several tests on the model:

  1. save the test input data and run the trained model with it in the training .py file
  2. restore the trained model (in a different .py file) and run it with the saved test input

Those two cases give me the same result, but the next cases give me a different one:

  3. run the trained model with only 1 or 2 of the test inputs in the training .py file
  4. restore the trained model (in a different .py file) and run it with the same 1 or 2 test inputs

Cases 1 and 2 give the same result (result1 and result3 below), and cases 3 and 4 give the same result (result2 and result4 below), but the two pairs differ from each other.

The following code shows the problem. NXtest is the normalized value of xtest.

Training.py

result1 = sess.run(Out, feed_dict={X: NXtest})
result2 = sess.run(Out, feed_dict={X: NXtest[0:2,:]})

Restore.py

result3 = sess.run(Out, feed_dict={X: NXtest})
result4 = sess.run(Out, feed_dict={X: NXtest[0:2,:]})
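
For completeness, the restore step in Restore.py follows the standard tf.train.Saver pattern. A minimal sketch, assuming the graph (X, Out) is rebuilt exactly as in Training.py before restoring; the checkpoint path ./model.ckpt is an assumption:

import tensorflow as tf

# Rebuild the same graph (X, Out) here, exactly as in Training.py, then:
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "./model.ckpt")  # hypothetical checkpoint path
    result3 = sess.run(Out, feed_dict={X: NXtest})
    result4 = sess.run(Out, feed_dict={X: NXtest[0:2, :]})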

result1 and result3:

[[  1.8736366 ,   2.02535582,  19.39698982],
 [  2.67727947,   0.9930172 ,  16.15852356],
 [  0.90145612,   1.97343755,  14.90865993],
 [  1.78041267,   6.17082882,  18.19297409],
 [  4.76018906,   3.00801134,   9.77826309],...]

result2 and result4:

[[5.20546   7.42051 8.2718],
 [4.59359   3.55607 20.086]]

Why do they give me different results?

Upvotes: 1

Views: 813

Answers (1)

user9765822

Reputation: 21

I found what the problem was...

The problem was layer normalization.

I used the code below for training:

# Layer 1
HL1 = tf.add(tf.matmul(X, w1), b1)
# Layer normalization: statistics are taken over axis 0, i.e. across the batch
mean1, var1 = tf.nn.moments(HL1, [0])
HL1_hat = (HL1 - mean1) / tf.sqrt(var1 + epsilon)
scale1 = tf.Variable(tf.ones([n_hidden1]))
beta1 = tf.Variable(tf.zeros([n_hidden1]))
NL1 = scale1 * HL1_hat + beta1
# Activation
AL1 = tf.nn.relu(NL1)

But I never created tf.Variables that store those normalization statistics. tf.nn.moments(HL1, [0]) computes the mean and variance over axis 0, i.e. across the batch, so the statistics (and therefore the normalized output for any single input) depend on how many and which inputs are fed in together. That is why feeding the full test set and feeding only 1 or 2 rows give different outputs for the same rows.
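
You can see the effect with a minimal NumPy sketch of the same arithmetic (the random matrix H just stands in for the pre-activations HL1; it is illustrative, not my real data):

import numpy as np

rng = np.random.default_rng(0)
epsilon = 1e-8

def batch_normalize(h):
    # Same arithmetic as the normalization above: statistics over axis 0 (the batch)
    mean = h.mean(axis=0)
    var = h.var(axis=0)
    return (h - mean) / np.sqrt(var + epsilon)

H = rng.normal(size=(5, 3))        # pretend pre-activations for 5 inputs
full = batch_normalize(H)[0:2]     # normalize the full batch, keep the first 2 rows
small = batch_normalize(H[0:2])    # normalize only the first 2 rows

print(np.allclose(full, small))    # False: the statistics depend on the batch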

When I remove the layer normalization, as below, the model gives the same result no matter how many inputs I feed it:

# Layer 1
HL1 = tf.add(tf.matmul(X, w1), b1)
# Activation
AL1 = tf.nn.relu(HL1)
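
If you still want normalization, one batch-independent option (not the code I trained with) is true layer normalization, which takes the statistics over each sample's own features (axis 1) instead of over the batch (axis 0), so the output for one input never depends on the rest of the batch. A sketch in the same style, reusing X, w1, b1, n_hidden1, and epsilon from above:

# Layer 1
HL1 = tf.add(tf.matmul(X, w1), b1)
# Per-sample layer normalization: statistics over axis 1 (one sample's features),
# so the result for one input does not depend on the rest of the batch.
# Note: keep_dims is the TF1 spelling; newer releases call it keepdims.
mean1, var1 = tf.nn.moments(HL1, [1], keep_dims=True)
HL1_hat = (HL1 - mean1) / tf.sqrt(var1 + epsilon)
scale1 = tf.Variable(tf.ones([n_hidden1]))
beta1 = tf.Variable(tf.zeros([n_hidden1]))
NL1 = scale1 * HL1_hat + beta1
# Activation
AL1 = tf.nn.relu(NL1)

Alternatively, batch normalization with stored moving averages (for example tf.layers.batch_normalization with training=False at inference) is also batch-independent, because it uses saved population statistics instead of the current batch.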

Thank you for reading. Have a good day.

Upvotes: 1
