David Nemeskey

Reputation: 640

Tensorflow: train and test in separate functions

I am trying to use a Tensorflow model in two separate functions: one that trains it, and one used to test it. For example, the training function looks something like this:

graph = tf.Graph()
with graph.as_default():
    tf_dataset = tf.placeholder(tf.float32, shape=(None, num_dims))
    ...
    weights = tf.Variable(tf.truncated_normal([num_dims, num_labels]))
    ...
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
    prediction = tf.nn.softmax(logits)
    ...
    session = tf.Session(graph=graph)
    ...

The other function, used for evaluation, would just run prediction on the test data, like so:

session.run(prediction, feed_dict={tf_dataset: test_data})

The problem is, of course, that tf_dataset is not in the scope of the other function. I am fine with returning session and prediction from the training function, but having to share every single placeholder with the evaluation code seems a bit lame.

Is there a way to get the references somehow, from the session or the graph? Also, are there any good practices on how to separate training and evaluation code in Tensorflow?

Upvotes: 1

Views: 1362

Answers (1)

Yaroslav Bulatov

Reputation: 57903

You could give your placeholders unique names and use those, i.e.:

tf_dataset = tf.placeholder(tf.float32, shape=(None, num_dims), name="datainput")
...
sess.run(..., feed_dict={"datainput:0": mydata})
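For completeness, here is a minimal sketch of how the two functions could fit together once the tensors are named. It assumes the TF 1.x graph-mode API; the function names (train, evaluate) and the extra tensors (tf_labels, biases, logits) are made up for illustration and not from the original post:

import tensorflow as tf

def train(train_data, train_labels, num_dims, num_labels, learning_rate=0.5):
    graph = tf.Graph()
    with graph.as_default():
        # Name the placeholder so the evaluation code can feed it by name.
        tf_dataset = tf.placeholder(tf.float32, shape=(None, num_dims),
                                    name="datainput")
        tf_labels = tf.placeholder(tf.float32, shape=(None, num_labels))
        weights = tf.Variable(tf.truncated_normal([num_dims, num_labels]))
        biases = tf.Variable(tf.zeros([num_labels]))
        logits = tf.matmul(tf_dataset, weights) + biases
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=tf_labels,
                                                    logits=logits))
        optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
        # Name the prediction op as well, so it can be run by name later.
        prediction = tf.nn.softmax(logits, name="prediction")

        session = tf.Session(graph=graph)
        session.run(tf.global_variables_initializer())
        # A real training loop would iterate; one step is shown for brevity.
        session.run(optimizer, feed_dict={tf_dataset: train_data,
                                          tf_labels: train_labels})
    # Only the session needs to be handed over; tensors are reachable by name.
    return session

def evaluate(session, test_data):
    # "datainput:0" / "prediction:0" are the first outputs of the named ops.
    return session.run("prediction:0", feed_dict={"datainput:0": test_data})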

You can also get name/type pairs for all ops in your graph, so you could recover all the placeholder tensor names that way:

[(op.name+":0", op.op_def.name) for op in graph.get_operations()]

Upvotes: 2
