antonpuz

Reputation: 3316

Effect of setting sequence_length on the returned state in dynamic_rnn

Suppose I have an LSTM network to classify time series of length 10. The standard way to feed the time series to the LSTM is to form a [batch size X 10 X vector size] array and feed it to the LSTM:

self.rnn_t, self.new_state = tf.nn.dynamic_rnn(
        inputs=self.X, cell=self.lstm_cell, dtype=tf.float32, initial_state=self.state_in)

When using the sequence_length parameter I can specify the length of the timeseries.
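For example, something like this (seq_len here is just an illustrative placeholder for the per-example lengths, not part of my actual code):

seq_len = tf.placeholder(tf.int32, [None])  # one length per example in the batch
self.rnn_t, self.new_state = tf.nn.dynamic_rnn(
        inputs=self.X, cell=self.lstm_cell, sequence_length=seq_len,
        dtype=tf.float32, initial_state=self.state_in)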

My question: for the scenario defined above, if I call dynamic_rnn 10 times with a tensor of size [batch size X 1 X vector size], taking the matching index in the time series and passing the returned state as the initial_state of the next call, would I end up with the same results (outputs and state) or not?

Upvotes: 1

Views: 266

Answers (1)

Vijay Mariappan

Reputation: 17201

You should get the same output in both cases. I'll illustrate this with a toy example below:

> 1. Setting up the inputs and the parameters of the network:

import tensorflow as tf
import numpy.testing as npt

# Set RNN params
batch_size = 2
time_steps = 10
vector_size = 5

# Create a random input (fixed seed so the run is reproducible)
dataset = tf.random_normal((batch_size, time_steps, vector_size), dtype=tf.float32, seed=42)

# Input tensor to the RNN
X = tf.Variable(dataset, dtype=tf.float32)
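Wrapping the random tensor in a tf.Variable means it is sampled once at initialization, so both graphs built below read exactly the same input values.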

> 2. RNN over the full time series, fed one input of shape [batch_size, time_steps, vector_size] (the toy example uses a BasicRNNCell in place of an LSTM):

# The weights must be identical in both variable scopes for the comparison
# to make sense, so use a fixed (all-ones) initializer instead of a random one.
with tf.variable_scope('rnn_full', initializer=tf.initializers.ones()):
   basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=10)
   output_f, state_f = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
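Here output_f has shape [batch_size, time_steps, num_units] = [2, 10, 10], and for a BasicRNNCell the returned state_f is simply the output at the last time step, with shape [batch_size, num_units].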

> 3. The same RNN called in a loop time_steps times to cover the full time series, where each call is fed one step of shape [batch_size, 1, vector_size] and the returned state is passed as the initial state of the next call:

# Unstack the inputs across time_steps
unstack_X = tf.unstack(X, axis=1)

outputs = []
with tf.variable_scope('rnn_unstacked', initializer=tf.initializers.ones()):
   basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=10)

   # The first call starts from the zero state, just as dynamic_rnn does internally
   init_state = basic_cell.zero_state(batch_size, dtype=tf.float32)

   # Feed the cell one time step at a time, time_steps calls in total
   for i in range(len(unstack_X)):
      output, state = tf.nn.dynamic_rnn(basic_cell, tf.expand_dims(unstack_X[i], 1), dtype=tf.float32, initial_state=init_state)
      # Carry the returned state over as the initial state of the next call
      init_state = state
      outputs.append(output)
   # Reassemble the per-step outputs into [batch_size, time_steps, num_units]
   output_r = tf.transpose(tf.squeeze(tf.stack(outputs)), [1, 0, 2])
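Each per-step output has shape [batch_size, 1, num_units]; tf.stack gives [time_steps, batch_size, 1, num_units], tf.squeeze removes the singleton axis, and the transpose brings the result back to [batch_size, time_steps, num_units] so it lines up with output_f.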

> 4. Checking the outputs

with tf.Session() as sess:
   sess.run(tf.global_variables_initializer())
   out_f, st_f = sess.run([output_f, state_f])
   out_r, st_r = sess.run([output_r, state])

   # Compare the full-sequence run against the step-by-step run
   npt.assert_almost_equal(out_f, out_r)
   npt.assert_almost_equal(st_f, st_r)

Both the states and the outputs match.

Upvotes: 1
