Reputation: 2385
I'm very perplexed by TensorFlow variable reuse. In the rnn method, I'm able to find this line of code:

if time > 0: vs.get_variable_scope().reuse_variables()
However, in dynamic_rnn (the method I need to use), I cannot find any call to reuse_variables, or any reuse=True.
All the RNN cells in the rnn_cell module are initialized using the _linear method, which does not check whether a variable has already been created; in LSTMCell, however, _get_concat_variable does check whether a variable's name already exists in graph_key.
So does dynamic_rnn not reuse variables? Should I write a method that explicitly checks whether a variable has already been created and returns it if it has?
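The "explicitly check and return" helper described above can be sketched in plain Python. This is a toy model of the idea only, not TensorFlow code; the names VariableStore and get_variable here are hypothetical:

```python
# Minimal create-or-reuse variable store, similar in spirit to what a
# variable-checking helper would do: return the existing variable if the
# name has been seen before, otherwise create it once and remember it.

class VariableStore:
    def __init__(self):
        self._vars = {}  # maps variable name -> variable object

    def get_variable(self, name, initializer):
        if name not in self._vars:
            self._vars[name] = initializer()
        return self._vars[name]

store = VariableStore()
w1 = store.get_variable("rnn/weights", lambda: [0.0] * 4)
# Second lookup with the same name: the initializer is ignored and the
# original object is returned, so no new parameters are created.
w2 = store.get_variable("rnn/weights", lambda: [1.0] * 4)
assert w1 is w2
```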
Upvotes: 0
Views: 1011
Reputation: 335
The dynamic_rnn function has a parameter called scope, so you should create your own scope (using with tf.variable_scope('scope_name', reuse=True)) and pass it when calling the dynamic_rnn function.
You can check out the implementation of model_with_buckets, which reuses the same model for each bucket.
Upvotes: 1