neouyghur

Reputation: 1647

slim inception_v4 retrain ValueError: All shapes must be fully defined

I got the following error when training inception_v4 with slim:

ValueError: All shapes must be fully defined: [TensorShape([Dimension(299), Dimension(299), Dimension(3)]), TensorShape([Dimension(None)])]

Full traceback

Traceback (most recent call last):
  File "../models/slim/train_vienna_classifier.py", line 575, in <module>
    tf.app.run()
  File "/home/osman/anaconda2/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "../models/slim/train_vienna_classifier.py", line 441, in main
    capacity=5 * FLAGS.batch_size)
  File "/home/osman/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 872, in batch
    name=name)
  File "/home/osman/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 658, in _batch
    capacity=capacity, dtypes=types, shapes=shapes, shared_name=shared_name)
  File "/home/osman/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.py", line 685, in __init__
    shapes = _as_shape_list(shapes, dtypes)
  File "/home/osman/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.py", line 77, in _as_shape_list
    raise ValueError("All shapes must be fully defined: %s" % shapes)
ValueError: All shapes must be fully defined: [TensorShape([Dimension(299), Dimension(299), Dimension(3)]), TensorShape([Dimension(None)])]

The code

with tf.device(deploy_config.inputs_device()):
  provider = slim.dataset_data_provider.DatasetDataProvider(
      dataset,
      num_readers=FLAGS.num_readers,
      common_queue_capacity=20 * FLAGS.batch_size,
      common_queue_min=10 * FLAGS.batch_size)
  [image, label] = provider.get(['image', 'label'])
  label -= FLAGS.labels_offset

  train_image_size = FLAGS.train_image_size or network_fn.default_image_size

  image = image_preprocessing_fn(image, train_image_size, train_image_size)
  images, labels = tf.train.batch(
      [image, label],
      batch_size=FLAGS.batch_size,
      num_threads=FLAGS.num_preprocessing_threads,
      capacity=5 * FLAGS.batch_size)
  labels = slim.one_hot_encoding(
      labels, dataset.num_classes - FLAGS.labels_offset)
  batch_queue = slim.prefetch_queue.prefetch_queue(
      [images, labels], capacity=2 * deploy_config.num_clones)

Although the images in the dataset have different sizes, I am resizing them with the provided preprocessing function, so the error should not be raised. Am I correct?
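For reference, a quick way to see which tensor is underspecified (a sketch reusing the variables above) is to print the static shapes just before the tf.train.batch call:

print(image.get_shape())  # (299, 299, 3) after preprocessing -- fully defined
print(label.get_shape())  # (?,) -- the shape the error complains about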

Upvotes: 2

Views: 748

Answers (1)

Vijay Mariappan

Reputation: 17201

The issue is not with the images but with the labels, whose shape is not defined: [TensorShape([Dimension(299), Dimension(299), Dimension(3)]), TensorShape([Dimension(None)])]. The second tensor's dimension is shown as None, and tf.train.batch requires a fully defined static shape for every tensor it batches. Setting the labels to the correct shape should fix the issue.

Use the tf.reshape() function to set the shape of the labels.
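A minimal sketch of that fix, applied to the input pipeline from the question (it assumes each label holds a single class index, as in the standard slim training script):

[image, label] = provider.get(['image', 'label'])
label -= FLAGS.labels_offset

# tf.train.batch requires a fully defined static shape for every input
# tensor; the label comes out of the provider with shape (?,), so pin it
# to a scalar explicitly before batching.
label = tf.reshape(label, [])

If the label is instead kept as a length-1 vector, label.set_shape([1]) would also satisfy tf.train.batch without adding a reshape op, but a scalar per example is what the rest of the script (e.g. slim.one_hot_encoding after batching) expects.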

Upvotes: 1
