Peter

Reputation: 806

How can I save and read variable-size images in TensorFlow's protobuf format?

I'm trying to write variable-size images in TensorFlow's protobuf format with the following code:

img_feature = tf.train.Feature(
    bytes_list=tf.train.BytesList(value=[
        img.flatten().tostring()]))
# Define how the sequence length is stored
seq_len_feature = tf.train.Feature(
    int64_list=tf.train.Int64List(value=[seq_len]))
# Define how the label list is stored
label_list_feature = tf.train.Feature(
    int64_list=tf.train.Int64List(value=label_list))
# Define the feature dictionary that defines how the data is stored
feature = {
    IMG_FEATURE_NAME: img_feature,
    SEQ_LEN_FEATURE_NAME: seq_len_feature,
    LABEL_LIST_FEATURE_NAME: label_list_feature}
# Create an example object to store
example = tf.train.Example(
    features=tf.train.Features(feature=feature))

Here the image img that I save has a fixed height but a variable width.

Now if I want to parse this image with the following code:

# Define how the features are read from the example
features_dict = {
  IMG_FEATURE_NAME: tf.FixedLenFeature([], tf.string),
  SEQ_LEN_FEATURE_NAME: tf.FixedLenFeature([1], tf.int64),
  LABEL_LIST_FEATURE_NAME: tf.VarLenFeature(tf.int64),
}
features = tf.parse_single_example(
    serialized_example,
    features=features_dict)
# Decode string to uint8 and reshape to image shape
img = tf.decode_raw(features[IMG_FEATURE_NAME], tf.uint8)
img = tf.reshape(img, (self.img_shape, -1))
seq_len = tf.cast(features[SEQ_LEN_FEATURE_NAME], tf.int32)
# Convert list of labels
label_list = tf.cast(features[LABEL_LIST_FEATURE_NAME], tf.int32)

I get the following error: ValueError: All shapes must be fully defined: [TensorShape([Dimension(28), Dimension(None)]), TensorShape([Dimension(1)]), TensorShape([Dimension(3)])]

Is there a way to store images of variable size (more specifically, of variable width in my case) and read them with TFRecordReader?

Upvotes: 0

Views: 1261

Answers (2)

Peter

Reputation: 806

I was eventually able to make it work with the following code to create the protobuf data file:

# Encode the image as a PNG byte string
_, img_png = cv2.imencode('.png', img)
img_png = img_png.tostring()
# One bytes feature per (string) label
label_list_feature = [
    tf.train.Feature(bytes_list=tf.train.BytesList(value=[label]))
    for label in label_list]
# Define feature for the PNG-encoded image
img_feature = tf.train.Feature(bytes_list=tf.train.BytesList(
        value=[img_png]))
# Define feature for sequence length
seq_len_feature = tf.train.Feature(
    int64_list=tf.train.Int64List(value=[seq_len]))
# Feature list that contains list of labels
feature_list = {
    LABEL_LIST_FEATURE_NAME: tf.train.FeatureList(
        feature=label_list_feature)
}
# Context that contains the sequence length and the image
context = tf.train.Features(feature={
    IMG_FEATURE_NAME: img_feature,
    SEQ_LEN_FEATURE_NAME: seq_len_feature
})
feature_lists = tf.train.FeatureLists(feature_list=feature_list)
# Add sequence length as context
example = tf.train.SequenceExample(
    feature_lists=feature_lists,
    context=context)
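
For completeness, here is a minimal sketch of how the serialized SequenceExample could be written to disk with the TF 1.x writer API; the filename train.tfrecords is a placeholder:

# Write the serialized example to a TFRecord file;
# 'train.tfrecords' is a placeholder filename.
with tf.python_io.TFRecordWriter('train.tfrecords') as writer:
    writer.write(example.SerializeToString())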

And the following code to read from the protobuf:

# Sequence length is a context feature
context_features = {
    IMG_FEATURE_NAME: tf.FixedLenFeature([], dtype=tf.string),
    SEQ_LEN_FEATURE_NAME: tf.FixedLenFeature([], dtype=tf.int64)
}
# The label list is a sequence feature
sequence_features = {
    LABEL_LIST_FEATURE_NAME: tf.FixedLenSequenceFeature(
        [], dtype=tf.string)
}
# Parse the example
context_parsed, sequence_parsed = tf.parse_single_sequence_example(
    serialized=serialized_example,
    context_features=context_features,
    sequence_features=sequence_features
)
seq_len = tf.cast(context_parsed[SEQ_LEN_FEATURE_NAME], tf.int32)
# Process the image
img = context_parsed[IMG_FEATURE_NAME]
img = tf.image.decode_png(img, dtype=tf.uint8, channels=nb_channels)
img = tf.reshape(img, (img_height, -1, nb_channels))
labels = sequence_parsed[LABEL_LIST_FEATURE_NAME]
return img, seq_len, labels
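
For reference, a sketch of one way to obtain serialized_example above, using the queue-based TFRecordReader pipeline the question asks about (train.tfrecords is again a placeholder filename):

# Queue-based input pipeline (TF 1.x): read one serialized example
# at a time from the TFRecord file.
filename_queue = tf.train.string_input_producer(['train.tfrecords'])
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)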

Note: in this example I changed my list of integer labels to a list of string labels (which in my case are more natural). I'm also storing the image as a PNG byte string.

Upvotes: 1

Alexander Gorban

Reputation: 1238

First, I was not able to reproduce the error. The following code works just fine:

import tensorflow as tf
import numpy as np

image_height = 100
img = np.random.randint(low=0, high=255, size=(image_height,200), dtype='uint8')
IMG_FEATURE_NAME = 'image/raw'

with tf.Graph().as_default():
  img_feature = tf.train.Feature(
      bytes_list=tf.train.BytesList(value=[
          img.flatten().tostring()]))
  feature = {IMG_FEATURE_NAME: img_feature}

  example = tf.train.Example(features=tf.train.Features(feature=feature))
  serialized_example = example.SerializeToString()

  features_dict = {IMG_FEATURE_NAME: tf.FixedLenFeature([], tf.string)}
  features = tf.parse_single_example(serialized_example, features=features_dict)
  img_tf = tf.decode_raw(features[IMG_FEATURE_NAME], tf.uint8)
  img_tf = tf.reshape(img_tf, (image_height, -1))

  with tf.Session() as sess:
    img_np = sess.run(img_tf)

  print(img_np)

print('Images are identical: %s' % (img == img_np).all())

It outputs:

Images are identical: True

Second, I'd recommend storing images encoded as PNG instead of raw bytes and reading them using tf.VarLenFeature + tf.image.decode_png. It will save you a lot of space and naturally supports variable-size images.
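
A minimal sketch of the PNG route, assuming a single PNG string per example (so tf.FixedLenFeature([], tf.string) is used here rather than tf.VarLenFeature; the feature name 'image/png' is an assumption):

import numpy as np
import tensorflow as tf

image_height = 100
img = np.random.randint(low=0, high=255, size=(image_height, 200), dtype='uint8')

with tf.Graph().as_default():
  # Encode the array as PNG at writing time; encode_png expects HxWxC.
  png_bytes = tf.image.encode_png(img[:, :, np.newaxis])
  with tf.Session() as sess:
    png_str = sess.run(png_bytes)

  feature = {'image/png': tf.train.Feature(
      bytes_list=tf.train.BytesList(value=[png_str]))}
  example = tf.train.Example(features=tf.train.Features(feature=feature))
  serialized_example = example.SerializeToString()

  # At reading time the PNG header carries the image shape, so no
  # reshape with a hard-coded width is needed.
  features = tf.parse_single_example(
      serialized_example,
      features={'image/png': tf.FixedLenFeature([], tf.string)})
  img_tf = tf.image.decode_png(features['image/png'], channels=1)

  with tf.Session() as sess:
    img_np = sess.run(img_tf)

print('Images are identical: %s' % (img == img_np[:, :, 0]).all())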

Upvotes: 0
