Reputation: 1359
I have a complicated use case which I've distilled down to just incrementing a variable in Tensorflow.
a = tf.Variable(1, trainable=False)
b = tf.constant(2)
a = tf.assign_add(a, b)
In [32]: type(a)
Out[32]: tensorflow.python.framework.ops.Tensor
My actual use case is generating a new random tensor under certain conditions each time my custom Keras layer is called, but it seems to boil down to a variable turning into a tensor as soon as I do anything to it. Is the correct approach to wrap each assignment as a = tf.Variable(tf.assign(a, b))
so that a
changes every time my Keras layer is called?
Upvotes: 0
Views: 110
Reputation: 24581
You are overthinking it. tf.assign_add
returns an op that adds to a variable. The fact that it also returns the resulting value is for convenience only; the variable itself is updated.
Example:
import tensorflow as tf
a = tf.Variable(1, trainable=False)
b = tf.constant(2)
c = tf.assign_add(a, b)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
print(sess.run(a))
# 1: the original value
print(sess.run(c))
# 3: the result of the addition
print(sess.run(a))
# 3: OK, the variable has indeed been added to
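As an aside, if you are on TensorFlow 2.x, where eager execution is the default and tf.assign_add is no longer in the top-level namespace, the same pattern uses the method on the variable directly. A minimal sketch, assuming TF 2.x:

```python
import tensorflow as tf

# Same pattern in TF 2.x (eager mode): use the Variable's own
# assign_add method instead of the removed tf.assign_add op.
a = tf.Variable(1, trainable=False)
b = tf.constant(2)

print(a.numpy())     # 1: the original value
c = a.assign_add(b)  # mutates the variable in place and returns the new value
print(c.numpy())     # 3: the returned result of the addition
print(a.numpy())     # 3: the variable itself has been updated
```

The variable stays a tf.Variable throughout; only the return value of assign_add is a tensor-like object, so there is no need to re-wrap it in tf.Variable.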
Upvotes: 1