user155322

Reputation: 767

tensorflow graph execution order

consider the code:

#tensorflow graph
input  = tf.placeholder(...)
func1  = tf.some_function(input, ...)
func2  = tf.some_function(func1, ...)
....

#code 1 
res1, res2 = sess.run([func1, func2], feed_dict=feed_dict_input)

#code 2 
res1 = sess.run(func1, feed_dict=feed_dict_input)
res2 = sess.run(func2, feed_dict=feed_dict_input)

If I run code 2, will func1 run twice? That is, is func1 run once to produce res1, and then run again as part of producing res2?

Is TensorFlow smart enough to figure out the dependency between func1 and func2, so that each function is run the minimum number of times?

Upvotes: 1

Views: 1236

Answers (1)

mrry

Reputation: 126154

For the sake of concreteness, I'm assuming that func1 and func2 in your example are tf.Tensor objects.

  • In "code 1", the value of func1 will be computed once: the same value will be returned to the user and used to compute func2.

  • In "code 2", the value of func1 will be computed twice: once in each call to sess.run().

TensorFlow does not cache intermediate tensor values between calls to tf.Session.run(). The reason for this is simple: in a typical neural network workload (training or inference), most intermediate values become invalid between runs of the graph, because they are a function of the input (which changes from step to step) and the current state (which changes during training). If you want to save a value for later use, you must explicitly assign it to a tf.Variable, or store it in some other stateful object, such as a tf.FIFOQueue.
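The semantics described above can be sketched with a toy dataflow interpreter in plain Python (this is illustrative only, not the real TensorFlow API): within a single run() call each node is evaluated at most once, but the per-run cache is discarded between calls, so fetching dependent nodes in separate runs re-executes their shared ancestors.

```python
# Toy model of Session.run() semantics: a graph maps each node name to
# (function, list of dependency names). All names here are hypothetical.

def run(graph, fetches, feeds):
    """Evaluate the fetched nodes, memoizing within this call only."""
    cache = dict(feeds)  # per-run cache, discarded when run() returns

    def evaluate(node):
        if node not in cache:
            fn, deps = graph[node]
            cache[node] = fn(*(evaluate(d) for d in deps))
        return cache[node]

    return [evaluate(f) for f in fetches]

calls = {"func1": 0, "func2": 0}  # count how often each node executes

def f1(x):
    calls["func1"] += 1
    return x * 2

def f2(y):
    calls["func2"] += 1
    return y + 1

graph = {
    "func1": (f1, ["input"]),
    "func2": (f2, ["func1"]),
}

# "code 1": one run fetching both nodes -> func1 executes once.
res1, res2 = run(graph, ["func1", "func2"], {"input": 3})
print(res1, res2, calls["func1"])  # 6 7 1

# "code 2": two separate runs -> func1 executes again in each run,
# because nothing is cached across run() calls.
run(graph, ["func1"], {"input": 3})
run(graph, ["func2"], {"input": 3})
print(calls["func1"])  # 3
```

In the second pair of calls, fetching func2 alone still triggers a fresh evaluation of func1, mirroring why saving a value across runs in TensorFlow requires explicit state such as a tf.Variable.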

Upvotes: 5
