Brandon

Reputation: 145

How to verify the actual data in a TensorFlow slim dataset

I have encoded my data into tfrecord files. For each image, I encode multiple bounding boxes, each with its own label. Now I want to verify that my data is correctly decoded by the TensorFlow/slim dataset type, so I wrote the following test:

def test2(sess):
  labels_to_class = read_label_file(label_fname)
  reader = tf.TFRecordReader
  keys_to_features = {
    'image/encoded': tf.FixedLenFeature(
      (), tf.string, default_value=''),
    'image/format': tf.FixedLenFeature((), tf.string, default_value='jpg'),
    'image/object/labels': tf.VarLenFeature(dtype=tf.int64),
    'image/object/truns': tf.VarLenFeature(dtype=tf.int64),
    'image/object/occluds': tf.VarLenFeature(dtype=tf.int64),
    'image/object/bbox/xmin': tf.VarLenFeature(dtype=tf.int64),
    'image/object/bbox/xmax': tf.VarLenFeature(dtype=tf.int64),
    'image/object/bbox/ymin': tf.VarLenFeature(dtype=tf.int64),
    'image/object/bbox/ymax': tf.VarLenFeature(dtype=tf.int64),
  }
  items_to_handlers = {
    'image': slim.tfexample_decoder.Image('image/encoded', 'image/format'),
    'object/label': slim.tfexample_decoder.Tensor('image/object/labels'),
    'object/truncated': slim.tfexample_decoder.Tensor('image/object/truns'),
    'object/occluded': slim.tfexample_decoder.Tensor('image/object/occluds'),
    'object/bbox': slim.tfexample_decoder.BoundingBox(
        ['ymin', 'xmin', 'ymax', 'xmax'], 'image/object/bbox/'),
  }

  decoder = slim.tfexample_decoder.TFExampleDecoder(
    keys_to_features, items_to_handlers)

  dataset = slim.dataset.Dataset(
    data_sources=filename_queue,
    reader=reader,
    decoder=decoder,
    num_samples=sample_num,
    items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
    num_classes=_NUM_CLASSES,
    labels_to_names=labels_to_class)

  provider = slim.dataset_data_provider.DatasetDataProvider(dataset)

  keys = provider._items_to_tensors.keys()

  print(provider._num_samples)
  for item in provider._items_to_tensors:
    print(item, provider._items_to_tensors[item])

  [image, label] = provider.get(['image', 'object/label'])

  print('AAA')

  sess.run([image, label])

  print('BBB')

When I run the above code, it prints:

6
image Tensor("case/If_2/Merge:0", shape=(?, ?, 3), dtype=uint8)
object/label Tensor("SparseToDense:0", shape=(?,), dtype=int64)
object/occluded Tensor("SparseToDense_1:0", shape=(?,), dtype=int64)
record_key Tensor("parallel_read/common_queue_Dequeue:0", dtype=string)
object/bbox Tensor("transpose:0", shape=(?, 4), dtype=int64)
object/truncated Tensor("SparseToDense_2:0", shape=(?,), dtype=int64)
AAA

Then the program hangs there forever without printing any error message. It has shown the correct number of examples (6) and the correct types of the tensors I encoded, but I still want to check the values inside the tensors. Is there any way to check their values?

Thank you for your help.

-----------------Update--------------------

The code I added is:

tf.train.start_queue_runners(sess=sess)

print('Start verification process...')

for i in range(provider._num_samples):
  [image, labelList, truncList, occList, boxList] = provider.get([
      'image', 'object/label', 'object/truncated',
      'object/occluded', 'object/bbox'])
  # Re-encode the decoded image so it can be written back to disk and inspected.
  enc_image = tf.image.encode_jpeg(image)
  img, labels, truns, occluds, boxes = sess.run(
      [enc_image, labelList, truncList, occList, boxList])

  f = tf.gfile.FastGFile('out_%.2d.jpg' % i, 'wb')
  f.write(img)
  f.close()

  # Print every object (label, truncation/occlusion flags, bounding box) for this image.
  for j in range(labels.shape[0]):
    print('label=%d (%s), trunc=%d, occluded=%d at [%d, %d, %d, %d]' % (
        labels[j], labels_to_class[labels[j]], truns[j],
        occluds[j], boxes[j][0], boxes[j][1],
        boxes[j][2], boxes[j][3]))
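As a side note, the queue runners can also be started under a tf.train.Coordinator so the background reader threads are stopped and joined cleanly when the loop finishes. A minimal sketch of that pattern, assuming the same sess and the verification loop above:

coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
  # ... run the verification loop shown above ...
  pass
finally:
  # Signal the reader threads to stop and wait for them to finish.
  coord.request_stop()
  coord.join(threads)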

Upvotes: 1

Views: 1850

Answers (1)

Alexandre Passos

Reputation: 5206

You probably need to start queue runners for the image and label to be evaluated.
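For example (a minimal sketch; image and label are the tensors returned by provider.get in the question, and sess is the session it runs in):

# Start the background threads that fill the input queues; without them,
# sess.run() on anything read from the DatasetDataProvider blocks forever.
tf.train.start_queue_runners(sess=sess)

img_val, label_val = sess.run([image, label])
print(img_val.shape, label_val)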

Upvotes: 1
