wmaxlees

Reputation: 605

Reusing Tensorflow Variables

I want to have two different graphs depending on whether I am training my network or actually running it. One piece uses some unsupervised techniques to learn the values of a given matrix, and then I want to use the exact same matrix in a different graph.

I know how to get the value of the matrix using matrix_value = sess.run(my_matrix, {input: input_data}), but is there a way to initialize a tf.Variable with a set value?

Upvotes: 0

Views: 431

Answers (2)

pfm

Reputation: 6328

You don't have to create two identical graphs; you can use the same graph and just run different nodes.

Let me explain what I mean. Let's look at this example:

import tensorflow as tf

# Placeholders for the input features and the targets
x_ph = tf.placeholder(tf.float32, shape=[None, 1])
y_ph = tf.placeholder(tf.float32, shape=[None, 1])

# Declare weight variable initialized with a small truncated normal
W = tf.Variable(tf.truncated_normal([1, 1], stddev=0.1))
# Declare bias variable initialized to a constant 0.1
b = tf.Variable(tf.constant(0.1, shape=[1]))

# Linear model
y_pred = x_ph * W + b

# Mean squared error loss (halved)
loss = tf.reduce_mean(tf.square(y_pred - y_ph)) / 2.

For training, you define and run a train_op:

train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

This op will run a gradient descent step based on the loss op, which updates the variables W and b.

# Create the op that initializes all the variables
init = tf.global_variables_initializer()

with tf.Session() as sess:
    # Initialize all the variables by running the initializer op
    sess.run(init)
    for epoch in range(num_epoch):
        # Run sequentially the train_op and loss ops with the
        # x_ph and y_ph placeholders fed from the training data x and y
        _, loss_val = sess.run([train_op, loss], feed_dict={x_ph: x, y_ph: y})
        print('epoch %d: loss is %.4f' % (epoch, loss_val))

But now, if you just want to run the model, you can simply run the y_pred op. It will pick up the current values of W and b, and they won't be modified since you didn't call the train_op.

# See what the model does on the test set by evaluating the y_pred
# op using the x_test data. Note: this must run in the same session
# as the training above (in a fresh session, W and b would be
# uninitialized unless you restore them, e.g. with a tf.train.Saver).
test_val = sess.run(y_pred, feed_dict={x_ph: x_test})

When you ask TensorFlow to run the y_pred op with new data x_test fed into x_ph, it will only compute y_pred = x_ph * W + b (treating W and b as constants) without modifying anything else.
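As a quick sanity check (a minimal sketch, reusing the session and names from above), you can read W before and after evaluating y_pred to confirm that inference leaves it untouched:

import numpy as np

# Fetch the current value of W, run inference, then fetch it again
w_before = sess.run(W)
sess.run(y_pred, feed_dict={x_ph: x_test})
w_after = sess.run(W)
# No train_op was run, so the variable must be unchanged
assert np.array_equal(w_before, w_after)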

Also, it is worth mentioning that when you're done with training, you have the ability to override some variable values (e.g. if a variable's learnt value is very close to 1, you can just set it to 1 directly), as per the TensorFlow documentation.

Suppose, for example, that W = -1 and b = 1 happen to be the optimal parameters for our model. A variable is initialized to the value provided to tf.Variable, but it can be changed later using operations like tf.assign. We can set W and b accordingly:

fixW = tf.assign(W, [[-1.]])  # W has shape [1, 1]
fixb = tf.assign(b, [1.])     # b has shape [1]
sess.run([fixW, fixb])
# loss depends on both placeholders; y_test stands for the test targets
print(sess.run(loss, {x_ph: x_test, y_ph: y_test}))
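If you'd rather not add assign ops to the graph, TF 1.x-era variables also expose a load method that copies a value into the variable within a session (a minimal sketch under that assumption):

# Copy new values straight into the variables in the current session
W.load([[-1.]], sess)  # W has shape [1, 1]
b.load([1.], sess)     # b has shape [1]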

Upvotes: 1

Harsha Pokkalla

Reputation: 1802

You can try something like this:

import numpy as np
import tensorflow as tf

# The concrete values the variable should start from
value = [0, 1, 2, 3, 4, 5, 6, 7]
init = tf.constant_initializer(value)

with tf.Session():
    # Create a variable whose initializer fills it with `value`
    x = tf.get_variable('x', shape=[2, 4], initializer=init)
    x.initializer.run()
    print(x.eval())
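Tying this back to the question: the value you fetch with sess.run is just a NumPy array, so you can also pass it directly to tf.Variable as the initial value in your second graph (a minimal sketch; my_matrix, input_ph, and input_data stand in for your own tensors and data):

# In the training graph/session: fetch the learnt matrix as a NumPy array
matrix_value = sess.run(my_matrix, {input_ph: input_data})

# In the second graph: create a variable initialized with that array
reused_matrix = tf.Variable(matrix_value, name='reused_matrix')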

I hope this helps!

Upvotes: 1
