Reputation: 139
# Imports assumed for a standalone-Keras + TensorFlow 1.x setup
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Dense, Dropout
from keras.models import Model

sess = K.get_session()  # assumed: reuse the Keras-managed TF session

inp = Input([10], name='Input')
X = Dense(10, activation='relu', kernel_initializer='glorot_uniform')(inp)
X = Dropout(0.5, seed=0)(X)
X = Dense(1, activation='relu', kernel_initializer='glorot_uniform')(X)
X = Dropout(0.5, seed=0)(X)
m = Model(inputs=inp, outputs=X)

u = np.random.rand(1, 10)
sess.run(tf.global_variables_initializer())

K.set_learning_phase(0)   # test mode: dropout should be disabled
print(sess.run(X, {inp: u}))
print(sess.run(X, {inp: u}))

K.set_learning_phase(1)   # training mode: dropout should be active
print(sess.run(X, {inp: u}))
print(sess.run(X, {inp: u}))

print(m.predict(u))
Here is my code. When I run the model, I get the same result on every run. However, shouldn't the result change slightly between runs because of the dropout layers?
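For reference, another way I know of to evaluate the output with an explicit learning phase is the Keras backend-function pattern below. This is only a sketch reusing inp, X and u from above, and it assumes K.set_learning_phase has not been called yet, so that K.learning_phase() is still a placeholder tensor that can be fed:

f = K.function([inp, K.learning_phase()], [X])  # extra input: learning phase
print(f([u, 0]))  # learning phase 0: dropout disabled
print(f([u, 1]))  # learning phase 1: dropout active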
Upvotes: 0
Views: 283