Reputation: 226
I want to read the dataset generated by this code with the tf.data.Dataset
api. The repo shows it was written like this:
def image_to_tfexample(image_data, image_format, height, width, class_id):
    return tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': bytes_feature(image_data),
        'image/format': bytes_feature(image_format),
        'image/class/label': int64_feature(class_id),
        'image/height': int64_feature(height),
        'image/width': int64_feature(width),
    }))
with (encoded byte-string, b'png', 32, 32, label) as parameters.
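For context (not shown above), the bytes_feature and int64_feature helpers are presumably the usual slim dataset_utils wrappers; a sketch of what they likely look like:

```python
import tensorflow as tf

def bytes_feature(values):
    """Wraps a byte string in a tf.train.Feature."""
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[values]))

def int64_feature(values):
    """Wraps an int (or list of ints) in a tf.train.Feature."""
    if not isinstance(values, (tuple, list)):
        values = [values]
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))
```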
So, to read the .tfrecord file, the data format would have to be:
example_fmt = {
    'image/encoded': tf.FixedLenFeature((), tf.string, ""),
    'image/format': tf.FixedLenFeature((), tf.string, ""),
    'image/class/label': tf.FixedLenFeature((), tf.int64, -1),
    'image/height': tf.FixedLenFeature((), tf.int64, -1),
    'image/width': tf.FixedLenFeature((), tf.int64, -1)
}
parsed = tf.parse_single_example(example, example_fmt)
image = tf.decode_raw(parsed['image/encoded'], out_type=tf.uint8)
But it doesn't work: the dataset is empty after reading, and creating an iterator over it raises OutOfRangeError: End of sequence.
A short Python script for reproduction can be found here. I'm struggling to find exact documentation or examples for this problem.
Upvotes: 1
Views: 2982
Reputation: 816
This question is a little old, but it helped me to read and load tagged images (tagged with VoTT) for training YOLOv4/v3. Maybe this code is another "example" that might help someone:
def load_single_boxed_tfrecord(record):
    """
    Loads a single tfrecord with its bounding boxes and corresponding labels.

    Args:
        record: a serialized tfrecord (scalar string Tensor), as yielded by tf.data.TFRecordDataset

    Returns:
        (Tensor of image), (Tensor of labels), (list of Tensors: x_top_left, x_lower_right, y_top_left, y_lower_right)
    """
    feature = {
        'image/encoded': tf.io.FixedLenFeature([], tf.string),
        'image/object/class/label': tf.io.VarLenFeature(tf.int64),
        'image/object/bbox/xmin': tf.io.VarLenFeature(tf.float32),
        'image/object/bbox/ymin': tf.io.VarLenFeature(tf.float32),
        'image/object/bbox/xmax': tf.io.VarLenFeature(tf.float32),
        'image/object/bbox/ymax': tf.io.VarLenFeature(tf.float32),
        'image/filename': tf.io.FixedLenFeature([], tf.string),
        'image/width': tf.io.FixedLenFeature([], tf.int64),
        'image/height': tf.io.FixedLenFeature([], tf.int64),
    }
    tf_file = tf.io.parse_single_example(record, feature)
    # COLOR_CHANNELS is defined elsewhere (e.g. 3 for RGB)
    tf_img = tf.image.decode_image(tf_file['image/encoded'], channels=COLOR_CHANNELS)
    tf_img = tf.image.convert_image_dtype(tf_img, tf.float32)
    label = tf.sparse.to_dense(tf_file['image/object/class/label'], default_value=0)
    # bounding-box coordinates, normalized to [0, 1]:
    x1norm = tf.sparse.to_dense(tf_file['image/object/bbox/xmin'], default_value=0)
    x2norm = tf.sparse.to_dense(tf_file['image/object/bbox/xmax'], default_value=0)
    y1norm = tf.sparse.to_dense(tf_file['image/object/bbox/ymin'], default_value=0)
    y2norm = tf.sparse.to_dense(tf_file['image/object/bbox/ymax'], default_value=0)
    return tf_img, label, [x1norm, x2norm, y1norm, y2norm]
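The function above only shows the read side. For checking that the schema round-trips, here is a sketch of the matching write side, with placeholder file names, box coordinates, and labels, and assuming COLOR_CHANNELS = 3 (RGB); it writes one fake record and parses two of the VarLen fields back out:

```python
import numpy as np
import tensorflow as tf

COLOR_CHANNELS = 3  # assumption: RGB, matching the constant used above

def _bytes(v):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[v]))

def _floats(v):
    return tf.train.Feature(float_list=tf.train.FloatList(value=v))

def _ints(v):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=v))

# One fake 16x16 image with two boxes, using the VoTT-style schema above.
png = tf.io.encode_png(np.zeros((16, 16, COLOR_CHANNELS), np.uint8)).numpy()
example = tf.train.Example(features=tf.train.Features(feature={
    'image/encoded': _bytes(png),
    'image/filename': _bytes(b'demo.png'),
    'image/width': _ints([16]),
    'image/height': _ints([16]),
    'image/object/class/label': _ints([1, 2]),
    'image/object/bbox/xmin': _floats([0.1, 0.5]),
    'image/object/bbox/ymin': _floats([0.1, 0.5]),
    'image/object/bbox/xmax': _floats([0.4, 0.9]),
    'image/object/bbox/ymax': _floats([0.4, 0.9]),
}))
with tf.io.TFRecordWriter('boxes.tfrecord') as writer:
    writer.write(example.SerializeToString())

# Read the record back through the same VarLen schema to verify the round trip.
raw = next(iter(tf.data.TFRecordDataset('boxes.tfrecord')))
parsed = tf.io.parse_single_example(raw, {
    'image/object/class/label': tf.io.VarLenFeature(tf.int64),
    'image/object/bbox/xmin': tf.io.VarLenFeature(tf.float32),
})
labels = tf.sparse.to_dense(parsed['image/object/class/label'])
xmins = tf.sparse.to_dense(parsed['image/object/bbox/xmin'])
```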
Upvotes: 1
Reputation: 826
I'm still learning TensorFlow and TFRecord usage, so I'm no expert at this, but I found this guide useful in my case; it might be useful for you as well.
Upvotes: 0
Reputation: 5936
I can't test your code because I don't have the train.tfrecords file. Does this code create an empty dataset?
dataset = tf.data.TFRecordDataset('train.tfrecords')
dataset = dataset.map(parse_fn)
itr = dataset.make_one_shot_iterator()

with tf.Session() as sess:
    while True:
        try:
            print(sess.run(itr.get_next()))
        except tf.errors.OutOfRangeError:
            break
If this gives you an error, please let me know which line produces it.
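For anyone reading this later on TF 2.x: here is a self-contained round trip under the question's schema, using the eager tf.io APIs (no Session needed; the file name 'demo.tfrecord' and label value 7 are placeholders). Note that the stored bytes are a PNG stream, so this sketch decodes with tf.io.decode_png rather than reinterpreting the compressed bytes with decode_raw:

```python
import numpy as np
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

# Write one record with the same layout as image_to_tfexample in the question.
png_bytes = tf.io.encode_png(np.zeros((32, 32, 3), dtype=np.uint8)).numpy()
example = tf.train.Example(features=tf.train.Features(feature={
    'image/encoded': _bytes_feature(png_bytes),
    'image/format': _bytes_feature(b'png'),
    'image/class/label': _int64_feature(7),
    'image/height': _int64_feature(32),
    'image/width': _int64_feature(32),
}))
with tf.io.TFRecordWriter('demo.tfrecord') as writer:
    writer.write(example.SerializeToString())

example_fmt = {
    'image/encoded': tf.io.FixedLenFeature((), tf.string, ""),
    'image/class/label': tf.io.FixedLenFeature((), tf.int64, -1),
}

def parse_fn(record):
    parsed = tf.io.parse_single_example(record, example_fmt)
    # The encoded bytes are a PNG stream, so decode the image format
    # instead of calling decode_raw on the compressed bytes.
    image = tf.io.decode_png(parsed['image/encoded'], channels=3)
    return image, parsed['image/class/label']

dataset = tf.data.TFRecordDataset('demo.tfrecord').map(parse_fn)
image, label = next(iter(dataset))
print(image.shape, int(label))  # → (32, 32, 3) 7
```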
Upvotes: 1