Sergii

Reputation: 611

tensorflow: shared variables error with simple LSTM network

I am trying to build the simplest possible LSTM network. I just want it to predict the next value in the sequence np_input_data.

import tensorflow as tf
from tensorflow.python.ops import rnn_cell
import numpy as np

num_steps = 3
num_units = 1
np_input_data = [np.array([[1.],[2.]]), np.array([[2.],[3.]]), np.array([[3.],[4.]])]

batch_size = 2

graph = tf.Graph()

with graph.as_default():
    tf_inputs = [tf.placeholder(tf.float32, [batch_size, 1]) for _ in range(num_steps)]

    lstm = rnn_cell.BasicLSTMCell(num_units)
    initial_state = state = tf.zeros([batch_size, lstm.state_size])
    loss = 0

    for i in range(num_steps-1):
        output, state = lstm(tf_inputs[i], state)
        loss += tf.reduce_mean(tf.square(output - tf_inputs[i+1]))

with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()

    feed_dict={tf_inputs[i]: np_input_data[i] for i in range(len(np_input_data))}

    loss = session.run(loss, feed_dict=feed_dict)

    print(loss)

The interpreter raises:

ValueError: Variable BasicLSTMCell/Linear/Matrix already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
    output, state = lstm(tf_inputs[i], state)

What am I doing wrong?

Upvotes: 3

Views: 6206

Answers (3)

Engineero

Reputation: 12908

I ran into a similar issue in TensorFlow v1.0.1 using tf.nn.dynamic_rnn. It turned out the error only arose when I re-trained, or cancelled in the middle of training and restarted the training process: the graph from the previous run was never being reset.

Long story short: put a tf.reset_default_graph() at the start of your code and it should help, at least when using tf.nn.dynamic_rnn and retraining.
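A minimal sketch of that workaround, assuming the model is built in the default graph (with an explicit tf.Graph() as in the question, you would construct a fresh graph object instead):

import tensorflow as tf

# Clear any graph left over from a previous (interrupted) run so that
# variables like BasicLSTMCell/Linear/Matrix are created fresh rather
# than colliding with stale definitions by name.
tf.reset_default_graph()

# ... build the LSTM graph and run training as usual ...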

Upvotes: 5

Eugene Brevdo

Reputation: 899

Use tf.nn.rnn or tf.nn.dynamic_rnn, which handle the variable sharing across time steps (and a lot of other nice things) for you.
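Applied to the question's code, that looks roughly like the following sketch (written against the pre-1.0 API the question already uses; in TF 1.x, tf.nn.rnn was renamed tf.nn.static_rnn):

import tensorflow as tf
from tensorflow.python.ops import rnn_cell
import numpy as np

num_steps = 3
num_units = 1
batch_size = 2
np_input_data = [np.array([[1.], [2.]]), np.array([[2.], [3.]]), np.array([[3.], [4.]])]

graph = tf.Graph()
with graph.as_default():
    tf_inputs = [tf.placeholder(tf.float32, [batch_size, 1]) for _ in range(num_steps)]
    lstm = rnn_cell.BasicLSTMCell(num_units)

    # tf.nn.rnn unrolls the cell over the whole input list and manages
    # variable creation and reuse internally, so the ValueError goes away.
    outputs, final_state = tf.nn.rnn(lstm, tf_inputs, dtype=tf.float32)

    # Same next-step prediction loss as in the question.
    loss = 0
    for i in range(num_steps - 1):
        loss += tf.reduce_mean(tf.square(outputs[i] - tf_inputs[i + 1]))

with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    feed_dict = {tf_inputs[i]: np_input_data[i] for i in range(num_steps)}
    print(session.run(loss, feed_dict=feed_dict))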

Upvotes: 1

Vince Gatto

Reputation: 415

The call to lstm here:

for i in range(num_steps-1):
  output, state = lstm(tf_inputs[i], state)

will try to create variables with the same name on each iteration unless you tell it otherwise. You can do this using tf.variable_scope:

with tf.variable_scope("myrnn") as scope:
  for i in range(num_steps-1):
    if i > 0:
      scope.reuse_variables()
    output, state = lstm(tf_inputs[i], state)     

The first iteration creates the variables that represent your LSTM parameters, and every subsequent iteration (after the call to reuse_variables) just looks them up in the scope by name.
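Put together with the question's setup (imports, constants, and placeholders as in the question), the fixed graph construction looks like this sketch:

with graph.as_default():
    tf_inputs = [tf.placeholder(tf.float32, [batch_size, 1]) for _ in range(num_steps)]
    lstm = rnn_cell.BasicLSTMCell(num_units)
    initial_state = state = tf.zeros([batch_size, lstm.state_size])
    loss = 0

    with tf.variable_scope("myrnn") as scope:
        for i in range(num_steps - 1):
            if i > 0:
                scope.reuse_variables()  # share the weights created at i == 0
            output, state = lstm(tf_inputs[i], state)
            loss += tf.reduce_mean(tf.square(output - tf_inputs[i + 1]))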

Upvotes: 5
