SparkierFlunky

Reputation: 494

Train and Test with TFRecord Data

I created a TFRecord file for my image data and am able to load it and train my network with it.

import tensorflow as tf

height = 28
width = 28

tfrecords_train_filename = '../train-00000-of-00001'
tfrecords_test_filename = '../test-00000-of-00001'

def read_and_decode(filename_queue):
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)

    features = tf.parse_single_example(
        serialized_example,
        features={
            'image/class/label': tf.FixedLenFeature([], tf.int64),
            'image/encoded': tf.FixedLenFeature([], dtype=tf.string, default_value='')
    })

    image_buffer = features['image/encoded']
    image_label = tf.cast(features['image/class/label'], tf.int32)

    with tf.name_scope('decode_jpeg', values=[image_buffer]):
        image = tf.image.decode_jpeg(image_buffer, channels=3)
        image = tf.image.convert_image_dtype(image, dtype=tf.float32)
        image = tf.image.rgb_to_grayscale(image)

    image_shape = tf.stack([height, width, 1])

    image = tf.reshape(image, image_shape)

    return image, image_label

def inputs(filename, batch_size, num_epochs):
    if not num_epochs: num_epochs = None

    with tf.name_scope('input'):
        filename_queue = tf.train.string_input_producer([filename], num_epochs=num_epochs)

        image, label = read_and_decode(filename_queue)

        images, sparse_labels = tf.train.shuffle_batch(
            [image, label], batch_size=batch_size, num_threads=2,
            capacity=1000 + 3 * batch_size,
            min_after_dequeue=1000)

    return images, sparse_labels

image, label = inputs(filename=tfrecords_train_filename, batch_size=200, num_epochs=None)
image = tf.reshape(image, [-1, 784])
label = tf.one_hot(label - 1, 10)

# Create the model
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Local variables must also be initialized when num_epochs is set
    # on the string_input_producer.
    sess.run(tf.local_variables_initializer())

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    for i in range(1000):
        img, lbl = sess.run([image, label])
        sess.run(train_step, feed_dict={x: img, y_: lbl})

    img, lbl = sess.run([image, label])
    print(sess.run(accuracy, feed_dict={x: img, y_: lbl}))

    coord.request_stop()
    coord.join(threads)

The first function basically loads the TFRecord file and converts the data back to image data. Then, in inputs, the data gets shuffled into batches.

I now want the network to be evaluated regularly on test data while training. For this I would like to have something similar to test_image, test_label = inputs(filename=tfrecords_test_filename, batch_size=20, num_epochs=None). However, it seems to overwrite my previously defined queue and thus throws an OutOfRangeError. I was reading about the possibility of doing this with shared variables, but I don't get how to implement it. Is that even the right way to go? How can I evaluate the network periodically?
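For reference, a rough sketch of the failing attempt (reusing the inputs function and the accuracy op defined above; the exact placement inside the training loop is illustrative):

# Second, queue-based pipeline for the test file, built like the training one.
test_image, test_label = inputs(filename=tfrecords_test_filename,
                                batch_size=20, num_epochs=None)

# ... then, every few hundred training steps inside the session:
t_img, t_lbl = sess.run([test_image, test_label])
print(sess.run(accuracy, feed_dict={x: t_img, y_: t_lbl}))  # OutOfRangeError here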

Upvotes: 4

Views: 7186

Answers (2)

SparkierFlunky

Reputation: 494

What I ended up doing was, first, merging inputs and read_and_decode into one function:

def _parse_function(proto):
    features={
        'image/class/label': tf.FixedLenFeature([], tf.int64),
        'image/encoded': tf.FixedLenFeature([], dtype=tf.string, 
            default_value='')
    }
    parsed_features = tf.parse_single_example(proto, features)
    image_buffer = parsed_features['image/encoded']
    image_label = tf.cast(parsed_features['image/class/label'], tf.int32)
    with tf.name_scope('decode_jpeg', values=[image_buffer]):
        image = tf.image.decode_jpeg(image_buffer, channels=3)
        image = tf.image.convert_image_dtype(image, dtype=tf.float32)
        image = tf.image.rgb_to_grayscale(image)

    image_shape = tf.stack([height, width, 1])

    image = tf.reshape(image, image_shape)
    image = tf.reshape(image, [784])
    image_label = tf.one_hot(image_label - 1, 10)

    return image, image_label

and then handling the dataset like this:

# Training Dataset
train_dataset = tf.contrib.data.TFRecordDataset(['train'])  # path to the training TFRecord file
# Parse the record into tensors.
train_dataset = train_dataset.map(_parse_function)
train_dataset = train_dataset.shuffle(buffer_size=10000)
train_dataset = train_dataset.batch(200)
# Validation Dataset
validation_dataset = tf.contrib.data.TFRecordDataset(['validation'])  # path to the validation TFRecord file
validation_dataset = validation_dataset.map(_parse_function)
validation_dataset = validation_dataset.batch(200)
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.contrib.data.Iterator.from_string_handle(handle, 
    train_dataset.output_types, train_dataset.output_shapes)
next_element = iterator.get_next()
training_iterator = train_dataset.make_initializable_iterator()
validation_iterator = validation_dataset.make_one_shot_iterator()

This is a very convenient way to get TFRecord data into a Dataset. I could then switch during training simply with:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    training_handle = sess.run(training_iterator.string_handle())
    validation_handle = sess.run(validation_iterator.string_handle())

    # Compute for 10 epochs.
    for _ in range(10):
        sess.run(training_iterator.initializer)
        while True:
            try:
                img, lbl = sess.run(next_element,
                    feed_dict={handle: training_handle})
                sess.run(train_step, feed_dict={x: img, y_: lbl})
            except tf.errors.OutOfRangeError:
                # End of the training epoch: evaluate one validation batch.
                img, lbl = sess.run(next_element,
                    feed_dict={handle: validation_handle})
                print(sess.run(accuracy, feed_dict={x: img, y_: lbl}))
                break

What is still missing from this implementation is that I don't run over the whole evaluation set; that would be a minor modification, however. A sketch of how it could look is below.
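A minimal sketch of running the full validation set once per epoch, assuming validation_iterator is created with make_initializable_iterator() instead of make_one_shot_iterator() (so it can be rewound each epoch), and reusing correct_prediction, x, y_, next_element, handle, and sess from above:

# Count correct predictions per batch instead of averaging one batch.
num_correct = tf.reduce_sum(tf.cast(correct_prediction, tf.float32))

sess.run(validation_iterator.initializer)  # rewind the validation set
total_correct = 0.0
total_examples = 0
while True:
    try:
        img, lbl = sess.run(next_element, feed_dict={handle: validation_handle})
        total_correct += sess.run(num_correct, feed_dict={x: img, y_: lbl})
        total_examples += img.shape[0]
    except tf.errors.OutOfRangeError:
        break
print('validation accuracy: %.4f' % (total_correct / total_examples))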

Upvotes: 4

Engineero

Reputation: 12908

Check out the section on feedable iterators here. I think it might be what you are looking for. It uses the Dataset API, but I think it parallels the TFRecord API; I am not positive about that.

The gist, taken largely from the documentation linked previously:

# Define training and test datasets with the same structure.
training_data = tf.contrib.data.Dataset.(whatever)
test_data = tf.contrib.data.Dataset.(something_else)

# Feedable iterators use a handle placeholder.
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.contrib.data.Iterator.from_string_handle(
        handle,
        training_data.output_types,
        training_data.output_shapes)
next_element = iterator.get_next()

# You need iterators for each dataset to feed your feedable iterator.
# This gets a little wonky.
training_iterator = training_data.make_one_shot_iterator()
test_iterator = test_data.make_initializable_iterator()

# Use `Iterator.string_handle()` to get the value for your `handle`
# placeholder (this assumes an active tf.Session named `sess`).
training_handle = sess.run(training_iterator.string_handle())
test_handle = sess.run(test_iterator.string_handle())

# Finally run your training/testing. Say you want to train for 100
# steps, then test for 50 iterations, then repeat 10 times. And you
# want to reset your test iterator with every outer loop.
for _ in range(10):
    for _ in range(100):
        sess.run(next_element, feed_dict={handle: training_handle})
    sess.run(test_iterator.initializer)
    for _ in range(50):
        sess.run(next_element, feed_dict={handle: test_handle})

Looking at it a bit more, I am not sure this will help you. I will leave it up until I hear feedback either way.

Upvotes: 1
