Reputation: 12404
I'm currently building my operations twice, once for training and once for validation, with the variable_scope set to reuse=True so that I only have one set of weights to train.
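(Inside create_network the weight sharing looks roughly like this; the scope and variable names here are just illustrative:)
def create_network(reuse=False):
    # The second call with reuse=True gets back the same 'model/w' variable
    # instead of creating a new one.
    with tf.variable_scope('model', reuse=reuse):
        w = tf.get_variable('w', shape=[10, 10])
        loss = tf.reduce_sum(tf.square(w))
        tf.summary.scalar('loss', loss)
    return loss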
To organize the operations, though, I wrap the operation-building call for training in a
with tf.name_scope('train'):
and do the same for validation. This lets me create a few summary hooks easily, simply by calling
tf.summary.merge(tf.get_collection(tf.GraphKeys.SUMMARIES, scope='train'))
at the end to get summaries for either the training graph or the validation graph and save these summaries with the appropriate summary saver.
Unfortunately, this also means that a scalar in the training summaries is not displayed on the same plot as the equivalent scalar in the validation summaries (because the two live in different name scopes).
Is there either a way to remove the name scope before saving the summary, or a different method of wrapping the summaries for a specific case together without applying the name scope to begin with? Or do I need to manually keep track of the summaries for each case?
EDIT:
Just to clarify, my code looks something like:
with tf.name_scope('train'):
    create_network()  # Summaries created in here.
with tf.name_scope('validation'):
    create_network(reuse=True)  # More summaries in here.
train_summaries = tf.summary.merge(tf.get_collection(tf.GraphKeys.SUMMARIES, scope='train'))
validation_summaries = tf.summary.merge(tf.get_collection(tf.GraphKeys.SUMMARIES, scope='validation'))
# Down here, create the summary saver hooks, etc.
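The hooks themselves are created along these lines (the output directories and save interval here are just illustrative):
train_hook = tf.train.SummarySaverHook(
    save_steps=100, output_dir='logdir/train', summary_op=train_summaries)
validation_hook = tf.train.SummarySaverHook(
    save_steps=100, output_dir='logdir/validation', summary_op=validation_summaries)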
Upvotes: 2
Views: 1062
Reputation: 34288
Something like this is done in the multi-GPU CIFAR-10 example code to strip unnecessary tower prefixes from summary names:
loss_name = re.sub('%s_[0-9]*/' % cifar10.TOWER_NAME, '', l.op.name)
tf.summary.scalar(loss_name, l)
Perhaps you can report the scalar with the same name from both the validation and the training parts of your code.
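Adapted to your scopes, a rough (untested) sketch might look like the following. The shared placeholder and the log directories are my assumptions, and it writes summaries directly with two FileWriters rather than through your saver hooks; TensorBoard overlays scalars that share a tag when the writers point at sibling run directories:
import re
import tensorflow as tf

with tf.name_scope('train'):
    train_loss = tf.constant(0.0, name='loss')  # stand-in for create_network()

# Strip the scope prefix the same way the CIFAR-10 code strips 'tower_N/'.
clean_name = re.sub('^(train|validation)/', '', train_loss.op.name)  # -> 'loss'

# One scope-free summary op, fed from either phase, so both phases report
# under the identical tag instead of 'train/loss' vs. 'validation/loss'.
loss_value = tf.placeholder(tf.float32, shape=(), name='loss_value')
loss_summary = tf.summary.scalar(clean_name, loss_value)

train_writer = tf.summary.FileWriter('logdir/train')
validation_writer = tf.summary.FileWriter('logdir/validation')

# During a session:
# summ = sess.run(loss_summary, feed_dict={loss_value: current_train_loss})
# train_writer.add_summary(summ, global_step)
# summ = sess.run(loss_summary, feed_dict={loss_value: current_val_loss})
# validation_writer.add_summary(summ, global_step)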
Upvotes: 1