user123

Reputation: 5407

ValueError: setting an array element with a sequence when using TensorFlow

I have checked all similar threads, but couldn't fix my issue.

My code works fine on my local system, but when I execute it on the server, it gives this error. Code snippet:

    with tf.variable_scope("lstm") as scope:
        # The RNN cell
        single_cell = rnn_cell.DropoutWrapper(
            rnn_cell.LSTMCell(hidden_size, hidden_size, initializer=tf.random_uniform_initializer(-1.0, 1.0)),
            input_keep_prob=self.dropout_keep_prob_lstm_input,
            output_keep_prob=self.dropout_keep_prob_lstm_output)
        self.cell = rnn_cell.MultiRNNCell([single_cell] * num_layers)
        # Build the recurrence. We do this manually to use truncated backprop
        self.initial_state = tf.zeros([self.batch_size, self.cell.state_size])  # ERROR IS IN THIS LINE
        self.encoder_states = [self.initial_state]
        self.encoder_outputs = []

Traceback:

WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x7f56e6c2cb10>: The input_size parameter is deprecated.
Traceback (most recent call last):
  File "train.py", line 194, in <module>
    main()
  File "train.py", line 63, in main
    model = create_model(sess, hyper_params, vocab_size)
  File "train.py", line 124, in create_model
    hyper_params["batch_size"])
  File "/home/datametica/karim/deeplearning/neural-sentiment/models/sentiment.py", line 73, in __init__
    self.initial_state = tf.zeros([self.batch_size, self.cell.state_size])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1184, in zeros
    shape = ops.convert_to_tensor(shape, dtype=dtypes.int32, name="shape")
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 657, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 180, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 163, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 354, in make_tensor_proto
    nparray = np.array(values, dtype=np_dt)
ValueError: setting an array element with a sequence.

Here is a link to the actual code: https://github.com/inikdom/neural-sentiment/blob/master/train.py

Is this error due to the numpy version? The numpy version on my server was originally 1.11.2, so I uninstalled it and installed numpy 1.11.1.

My local system has 1.11.1, which works fine without any error.

Referring to this solution: tensorflow: ValueError: setting an array element with a sequence

I tried replacing tf.zeros with np.zeros, but it gave:

WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x7f84f6f8e890>: The input_size parameter is deprecated.
Traceback (most recent call last):
  File "train.py", line 194, in <module>
    main()
  File "train.py", line 63, in main
    model = create_model(sess, hyper_params, vocab_size)
  File "train.py", line 124, in create_model
    hyper_params["batch_size"])
  File "/home/datametica/karim/deeplearning/neural-sentiment/models/sentiment.py", line 73, in __init__
    self.initial_state = np.zeros([self.batch_size, self.cell.state_size])
TypeError: an integer is required

Upvotes: 0

Views: 521

Answers (2)

user123

Reputation: 5407

I thought it was due to the numpy version, so I tried changing it, but that didn't help. I also tried changing the code, with no luck.

What I found is that this code works fine with tensorflow 0.8.0.

If you install the latest tensorflow and try this code, it will give this error.

So I uninstalled the latest version and installed 0.8.0, and now it works fine again.
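
To make sure the server is really picking up the versions you expect (and not a second install elsewhere on the path), a quick check like this can be run on both machines; np.__version__ and tf.__version__ are the standard version attributes:

    import numpy as np
    import tensorflow as tf

    # Print the versions this interpreter actually imports; run the same
    # snippet on the local machine and on the server to compare environments.
    print("numpy: " + np.__version__)
    print("tensorflow: " + tf.__version__)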

Upvotes: 0

dm0_

Reputation: 2156

I think the reason is the state_is_tuple argument of the MultiRNNCell constructor. It is True by default, and in that case self.cell.state_size is a tuple.

Update

MultiRNNCell is a cell made of several other cells, so the state of a MultiRNNCell is composed of the states of its internal cells. The state_is_tuple argument of the constructor controls whether those internal states are joined into a single tensor. If it is False, the state_size of the MultiRNNCell is the sum of the state_size values of the internal cells (see the source). Otherwise state_size is a tuple of the state sizes of the internal cells.

In the latter case you are passing [self.batch_size, <tuple>] as the shape to tf.zeros (or np.zeros).
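
To see why that fails, here is a minimal sketch with a hypothetical batch size and state tuple; the two calls reproduce exactly the two errors from the question:

    import numpy as np

    batch_size = 32
    state_size = (200, 200)  # hypothetical: what self.cell.state_size looks like as a tuple
    shape = [batch_size, state_size]

    # tf.zeros converts the shape with roughly np.array(shape, dtype=int32)
    # inside make_tensor_proto; a tuple nested inside the list cannot form a
    # rectangular integer array:
    try:
        np.array(shape, dtype=np.int32)
    except ValueError as e:
        print(e)  # setting an array element with a sequence.

    # np.zeros rejects the nested shape directly:
    try:
        np.zeros(shape)
    except TypeError as e:
        print(e)  # e.g. "an integer is required" on Python 2 / older numpy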

I don't know why it works on your local system. I can only guess that your local system uses a different version of tensorflow with different default behavior.
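
Regardless of the version difference, a sketch of one way to avoid building the shape by hand is the zero_state() helper that RNN cells provide; it constructs a correctly shaped zero state whether state_size is a plain integer or a tuple (the line below is written in the context of the questioner's model, so self.cell and self.batch_size come from the snippet in the question):

    # Instead of tf.zeros([self.batch_size, self.cell.state_size]):
    self.initial_state = self.cell.zero_state(self.batch_size, tf.float32)

Keep in mind that with state_is_tuple=True this returns a tuple of per-layer states rather than a single tensor, so downstream code that concatenates or slices the state as one tensor would need adjusting as well.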

Upvotes: 1
