Reputation: 27
I am working on LSTM-based models. The data consists of 80,000 images. I am using a batch size of 1 and getting the following error log:
OutOfRangeError (see above for traceback): PaddingFIFOQueue '_1_Train_data/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: Train_data/batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_STRING, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](Train_data/batch/padding_fifo_queue, Train_data/batch/n)]]
Can someone suggest what the possible issue could be? The FIFOQueue size is shown as 0 for every batch size I tried.
Upvotes: 0
Views: 897
Reputation: 27
Some of the images in the database were corrupted, which was causing the program to run into this error. I removed those images and it is now working fine.
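In case it helps others: one way to find such files up front is to try decoding every image before training. A minimal sketch, assuming the images sit in a flat directory and can be read by Pillow (the directory path below is a placeholder):
import os
from PIL import Image

image_dir = "path/to/images"  # placeholder; point this at your dataset
bad_files = []
for name in os.listdir(image_dir):
    path = os.path.join(image_dir, name)
    try:
        with Image.open(path) as img:
            img.verify()  # raises an exception for corrupted/truncated files
    except Exception:
        bad_files.append(path)

print("Corrupted images:", bad_files)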
Upvotes: 0
Reputation: 486
The error has nothing to do with the LSTM; you are getting it from tf.train.batch.
You have to initialize your TF local variables along with the global variables.
From this open issue https://github.com/tensorflow/tensorflow/issues/1045, it seems that the order of initialization matters.
import tensorflow as tf

global_init_op = tf.global_variables_initializer()
local_init_op = tf.local_variables_initializer()

with tf.Session() as sess:
    sess.run(global_init_op)
    sess.run(local_init_op)
    # rest of your code
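If initialization alone does not fix it, keep in mind that a queue-based input pipeline such as tf.train.batch also needs its queue runners started inside the session; otherwise the queue is never filled and every dequeue raises this OutOfRangeError. A rough sketch of what that typically looks like in TF 1.x, continuing inside the with tf.Session() block above (your training loop replaces the placeholder comment):
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        # training loop goes here
        pass
    finally:
        coord.request_stop()
        coord.join(threads)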
Upvotes: 1