Lei_Bai

Reputation: 88

How to create two graphs for train and validation?

When I read the TensorFlow guide on graphs and sessions (Graphs and Sessions), I found that it suggests creating two graphs: one for training and one for validation.


I think this is reasonable and I want to use it, because my training and validation models are different (for an encoder-decoder model, or because of dropout). However, I don't know how to make the variables in the training graph available to the test graph without using tf.train.Saver().
When I create two graphs and create variables inside each graph, I find that the two variables are totally different, as they belong to different graphs. I have googled a lot and I know there are questions about this problem, such as question1, but there is still no useful answer. Does anyone have a code example, or know how to create two graphs for training and validation separately, such as:

def train_model():
    g_train = tf.Graph()
    with g_train.as_default():
        # build the training model here
        ...

def validation_model():
    g_test = tf.Graph()
    with g_test.as_default():
        # build the validation model here
        ...
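
For reference, here is a minimal sketch of the behaviour I observe (the variable name and shape are just placeholders): a variable created in one graph is a completely separate object from one with the same name in another graph.

import tensorflow as tf

g_train = tf.Graph()
with g_train.as_default():
    w_train = tf.get_variable('w', shape=[10])

g_test = tf.Graph()
with g_test.as_default():
    # Same name, but a different graph: this is an independent variable
    w_test = tf.get_variable('w', shape=[10])

print(w_train.graph is w_test.graph)  # False: the two graphs share nothing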

Upvotes: 3

Views: 374

Answers (1)

Olivier Dehaene

Reputation: 1680

One easy way of doing this is to create a 'forward function' that defines the model and changes its behaviour based on extra parameters.

Here is an example:

def forward_pass(x, is_training, reuse=tf.AUTO_REUSE, name='model_forward_pass'):
    # The reuse argument tells the variable getter to either create the
    # variables or fetch the existing weights
    with tf.variable_scope(name, reuse=reuse):
        x = tf.layers.conv2d(x, ...)
        ...
        x = tf.layers.dense(x, ...)
        # Note the training argument: it toggles dropout on and off
        x = tf.layers.dropout(x, rate=0.5, training=is_training)
        ...
        return x

Now you can call the forward_pass function anywhere in your code. You simply need to provide the is_training argument to get the correct mode for dropout, for example. Thanks to reuse=tf.AUTO_REUSE, the variable scope will automatically reuse the correct weights as long as the name of the variable_scope is the same.

For example:

train_logits_model1 = forward_pass(x_train, is_training=True, name='model1')
# The graph is defined and dropout runs in training mode

test_logits_model1 = forward_pass(x_test, is_training=False, name='model1')
# The graph is reused, but dropout switches to inference mode

train_logits_model2 = forward_pass(x_train2, is_training=True, name='model2')
# The name changed, so model2 is added to the graph and dropout runs in training mode
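
If you want to check that the weights really are shared, one way (just a sanity-check sketch, reusing the calls above) is to inspect the trainable-variable collections of each scope:

# 'model1' owns a single set of weights used by both the train and test
# logits, while 'model2' owns an independent set.
model1_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='model1')
model2_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='model2')
print(len(model1_vars))  # not doubled by the second forward_pass call on 'model1'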

To add to this answer: since you stated that you want to have two separate graphs, you could do that using assign ops:

train_graph = forward_pass(x, is_training=True, reuse=False, name='train_graph')

...
test_graph = forward_pass(x, is_training=False, reuse=False, name='test_graph')

...
# Collect the variables of each scope; the creation order matches
# because both scopes build the same architecture
train_vars = tf.get_collection('variables', 'train_graph/.*')
test_vars = tf.get_collection('variables', 'test_graph/.*')
test_assign_ops = [tf.assign(test, train) for test, train in zip(test_vars, train_vars)]
assign_op = tf.group(*test_assign_ops)

sess.run(assign_op)  # Replace the vars in test_graph by the ones in train_graph
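
In a training loop this might look like the following (train_op, eval_every, and test_accuracy are hypothetical names standing in for your own ops):

for step in range(10000):
    sess.run(train_op)                  # update the train_graph weights
    if step % eval_every == 0:
        sess.run(assign_op)             # copy train_graph weights into test_graph
        print(sess.run(test_accuracy))  # evaluate with inference-mode dropout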

I'm a big advocate of method 1, as it is way cleaner and reduces memory usage.

Upvotes: 2
