Reputation: 4806
I would like to achieve something like this using tensorflow.
I can only find documentation on saving and restoring variables (weights). However, like #2-2, I want to use the output of a hidden layer (a tensor) as the input to another model. Can this be done?
Upvotes: 0
Views: 2389
Reputation: 9877
As far as I'm aware, it is not possible to chain different computation graphs after they have been created. However, you have a few options.
Option 1: Create one large graph and use a control flow op
import tensorflow as tf

output_layer, placeholder = build_my_model()
# tf.where selects elementwise on the condition; note it evaluates both branches
something = tf.where(output_layer < 0, do_something_1(), do_something_2())
where all of the function calls above should return TensorFlow tensors (for tf.where, the two branches must have matching shapes).
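A minimal runnable sketch of this single-graph approach, assuming the TF 1.x API and hypothetical stand-ins for build_my_model, do_something_1, and do_something_2 (none of these are defined in the question):

import tensorflow as tf

def build_my_model():
    # Hypothetical one-layer model: a placeholder feeding a dense layer
    placeholder = tf.placeholder(tf.float32, shape=[None, 4])
    output_layer = tf.layers.dense(placeholder, 1)
    return output_layer, placeholder

output_layer, placeholder = build_my_model()
# Stand-ins for do_something_1 / do_something_2; both branches are
# built with the same shape as the condition, as tf.where requires
something = tf.where(output_layer < 0,
                     -tf.ones_like(output_layer),
                     tf.ones_like(output_layer))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(something, {placeholder: [[0.1, 0.2, 0.3, 0.4]]}))

Keep in mind that tf.where evaluates both branches; if one branch is expensive and your predicate is a scalar, tf.cond is the lazier alternative.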
Option 2: Create separate graphs and perform the conditional statement in Python
# Build the first graph
with tf.Graph().as_default() as graph:
    output_layer, placeholder = build_my_model()

# Build the other two graphs
with tf.Graph().as_default() as graph_1:
    something_1 = do_something_1()

with tf.Graph().as_default() as graph_2:
    something_2 = do_something_2()
As a result, you will also end up with three different sessions, and you will need to feed the output from the first session into one of the other two:
# Get the output of the first stage (sess is bound to `graph`)
_output_layer = sess.run(output_layer, {placeholder: ...})
if _output_layer < 0:
    something = sess1.run(something_1, {...})
else:
    something = sess2.run(something_2, {...})
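Filling in the gaps, here is a runnable sketch of the three-graph, three-session version, again with hypothetical stand-ins for the three functions, and with each tf.Session explicitly bound to its graph:

import tensorflow as tf

# Build the first graph (build_my_model() stand-in)
with tf.Graph().as_default() as graph:
    placeholder = tf.placeholder(tf.float32, shape=[])
    output_layer = placeholder * 2.0

# Build the other two graphs (do_something_1/2 stand-ins)
with tf.Graph().as_default() as graph_1:
    something_1 = tf.constant(-1.0)

with tf.Graph().as_default() as graph_2:
    something_2 = tf.constant(1.0)

# One session per graph
sess = tf.Session(graph=graph)
sess1 = tf.Session(graph=graph_1)
sess2 = tf.Session(graph=graph_2)

_output_layer = sess.run(output_layer, {placeholder: -3.0})
if _output_layer < 0:
    something = sess1.run(something_1)
else:
    something = sess2.run(something_2)
print(something)  # -1.0 here, since -3.0 * 2.0 < 0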
As you can see, if you can get away with the control flow op, your code will be significantly simpler. Another advantage of keeping everything in one graph is that the entire graph is differentiable, so you can train the parameters of the first stage of your model conditioned on the loss at a later stage.
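To illustrate the differentiability point, a small sketch (hypothetical tensors, TF 1.x API): tf.gradients differentiates through whichever tf.where branch was selected, back to an earlier-stage variable.

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable(1.0)
hidden = x * w
# Branches that depend on the first stage, so gradients can flow back
out = tf.where(hidden < 0.0, -hidden, hidden)
loss = tf.reduce_mean(out)
grads = tf.gradients(loss, [w])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grads, {x: [[-2.0], [3.0]]}))  # [2.5]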
Upvotes: 1