Reputation: 2742
What's the point of having more than one tf.Graph?
I'm thinking specifically about hyperparameter tuning of machine learning models, where a model is either a graph on its own, or several models are defined as disconnected components within the same graph.
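For concreteness, here is roughly what I mean by the first option, with a hypothetical build_model standing in for the real model-construction code:

import tensorflow as tf

def build_model(learning_rate):
    # hypothetical stand-in for the real model
    x = tf.placeholder(tf.float32, shape=[None, 1], name="x")
    w = tf.get_variable("w", shape=[1])
    loss = tf.reduce_mean(tf.square(x * w))
    return tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

for lr in [0.1, 0.01]:
    graph = tf.Graph()                    # a fresh graph per trial
    with graph.as_default():
        train_op = build_model(lr)
        init = tf.global_variables_initializer()
    with tf.Session(graph=graph) as sess:
        sess.run(init)
        # ...train and evaluate this trial...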
I understand that having more than one tf.Session is bad because task scheduling cannot be done properly, so I assumed it would be possible to have multiple tf.Graph objects in one session (though tf.Session(graph=...) begs to differ). But what would be the point of doing that instead of defining several components with something like tf.variable_scope? Is it mostly a matter of what gets saved with tf.train.Saver, what gets visualized in TensorBoard, and so on? Which method is preferable: should models share a graph, or should each have its own for hyperparameter tuning?
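And here is a minimal sketch of the variable-scope alternative, where the trials live as disconnected components in one default graph (the scope names are made up):

import tensorflow as tf

train_ops = []
for i, lr in enumerate([0.1, 0.01]):
    with tf.variable_scope("trial_%d" % i):   # keeps each trial's variables disjoint
        w = tf.get_variable("w", shape=[1])
        loss = tf.reduce_mean(tf.square(w))
        train_ops.append(
            tf.train.GradientDescentOptimizer(lr).minimize(loss))

with tf.Session() as sess:                    # one session sees all trials
    sess.run(tf.global_variables_initializer())
    sess.run(train_ops)                       # the components share nothing but the graph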
It seems simpler to just use tf.reset_default_graph(); sess = tf.InteractiveSession() and forget about both tf.Graph and tf.Session throughout the rest of the code base. What am I missing?
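In code, reusing the hypothetical build_model from the first sketch, that tuning loop would look like:

for lr in [0.1, 0.01]:
    tf.reset_default_graph()          # wipe the default graph between trials
    sess = tf.InteractiveSession()    # installs itself as the default session
    train_op = build_model(lr)
    sess.run(tf.global_variables_initializer())
    # ...train and evaluate this trial...
    sess.close()                      # don't leak sessions across trials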
Upvotes: 1
Views: 96
Reputation: 57893
If you have a single session, then there's no point in having multiple graphs. A session is tied to one graph, so if you try to run an element from another graph, you'll get an "xyz is not an element of this graph" error.
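For example, this minimal snippet reproduces that error:

import tensorflow as tf

g1 = tf.Graph()
g2 = tf.Graph()
with g1.as_default():
    a = tf.constant(1.0)       # a lives in g1

sess = tf.Session(graph=g2)    # session bound to g2
sess.run(a)                    # ValueError: ... is not an element of this graph.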
It makes sense to have multiple graphs when you have multiple sessions. For instance, suppose you are using distributed TensorFlow but also want to do some computations locally. You could do something like this:

local_graph = tf.Graph()
remote_graph = tf.Graph()
local_session = tf.Session("", graph=local_graph)              # in-process session
remote_session = tf.Session("grpc://...", graph=remote_graph)  # remote gRPC worker
You could potentially use two sessions with the same tf.Graph object; however, any addition to that graph will trigger a TF_ExtendGraph call on the next session.run in every session, even when the new nodes are not needed by that session. In other words, sharing the graph means sending the (up to 2 GB) graph description to all sessions whenever the graph is modified.
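As a sketch of the sharing pattern being described (the TF_ExtendGraph traffic happens inside the runtime, so nothing extra is visible from Python):

import tensorflow as tf

shared_graph = tf.Graph()
with shared_graph.as_default():
    a = tf.constant(1.0)

sess1 = tf.Session(graph=shared_graph)
sess2 = tf.Session(graph=shared_graph)
sess1.run(a)
sess2.run(a)

with shared_graph.as_default():
    b = tf.constant(2.0)       # graph modified after both sessions started

sess1.run(b)                   # next run re-extends the graph in sess1
sess2.run(a)                   # sess2 also receives the new definition, though it never uses b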
Upvotes: 3