Reputation: 872
I'd like to run the same RNN over two tensors in tensorflow. My current solution looks like this:
cell = tf.nn.rnn_cell.GRUCell(cell_size)
with tf.variable_scope("encoder", reuse=None):
    out1 = tf.nn.dynamic_rnn(cell, tensor1, dtype=tf.float32)
with tf.variable_scope("encoder", reuse=True):
    out2 = tf.nn.dynamic_rnn(cell, tensor2, dtype=tf.float32)
Is this the best way to ensure that the weights of the two RNN ops are shared?
Upvotes: 1
Views: 143
Reputation: 12175
Yeah, that is basically how I would do it. For a really simple model like this it does not matter much, but for a more complicated model I would define a function to build the graph:
def makeEncoder(input_tensor):
    cell = tf.nn.rnn_cell.GRUCell(cell_size)
    return tf.nn.dynamic_rnn(cell, input_tensor, dtype=tf.float32)
with tf.variable_scope('encoder') as scope:
    out1 = makeEncoder(tensor1)
    scope.reuse_variables()
    out2 = makeEncoder(tensor2)
The other way to do it would be to use tf.cond(...) as a switch that selects between the two inputs based on a boolean placeholder; everything would then flow through a single output. I have found that this can get a bit messy, and you would need to provide both inputs even when you really only need one. I think my first solution is the best.
Upvotes: 1