Reputation: 475
I'm having a problem that concerns the reproducibility of my results with TensorFlow (v1.15.3). I set all the seeds (os, random, numpy and tensorflow), but the results of a convolutional neural network still change between executions (even if they are similar).
I set my seeds in this way:
seed_value = 1234
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
import random
random.seed(seed_value)
import numpy as np
np.random.seed(seed_value)
import tensorflow as tf
tf.compat.v1.set_random_seed(seed_value)
tf.set_random_seed(seed_value)
Next I define the weights of the net like this:
weights = {
'conv1/conv2d': tf.get_variable('conv1/weights', shape=[3,3,512,1024], initializer=tf.contrib.layers.xavier_initializer()),
# and more ...
}
(Nothing changes whether or not I pass an initializer to the weights.)
After that I define the graph with convolution operations using the weights initialised above. (I omit this part because there is no way to set a seed on the tf.nn.conv2d operation in TensorFlow, and because the weights are the only dynamic part of the model that could affect the results.)
Any idea how to always get the same results after defining a model in this way with TensorFlow?
Thank you.
Upvotes: 0
Views: 274
Reputation: 1459
I suggest, after
weights = {
'conv1/conv2d': tf.get_variable('conv1/weights', shape=[3,3,512,1024], initializer=tf.contrib.layers.xavier_initializer()),
# and more ...
}
store the weights externally in a file. Then, on the next run, skip that line and load the weights from the external file instead, so every execution starts from identical values.
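A minimal sketch of that idea, using NumPy's `.npy` format for the file (the filename, the helper name, and the hand-rolled Xavier/Glorot draw are all made up for illustration; they are not part of the asker's code):

```python
import os
import numpy as np

WEIGHTS_FILE = "conv1_weights.npy"  # hypothetical path

def get_conv1_weights(shape=(3, 3, 512, 1024)):
    # First run: draw the weights once and persist them to disk.
    if not os.path.exists(WEIGHTS_FILE):
        fan_in = shape[0] * shape[1] * shape[2]
        fan_out = shape[0] * shape[1] * shape[3]
        limit = np.sqrt(6.0 / (fan_in + fan_out))  # Xavier/Glorot uniform bound
        w = np.random.uniform(-limit, limit, size=shape).astype(np.float32)
        np.save(WEIGHTS_FILE, w)
    # Every later run: reload the exact same values.
    return np.load(WEIGHTS_FILE)

# In the graph, feed the saved array back in instead of re-initialising, e.g.:
# weights = {'conv1/conv2d': tf.get_variable(
#     'conv1/weights', shape=[3, 3, 512, 1024],
#     initializer=tf.constant_initializer(get_conv1_weights()))}
```

This removes initializer randomness from the picture entirely; if results still differ between runs, the remaining variation comes from elsewhere (e.g. non-deterministic GPU kernels), not from the weights.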
Upvotes: 1