csej

Reputation: 55

Tensorflow Dataset API Evaluating Output Shapes takes more than 10 minutes

I'm working with Python 3.5, the low-shot Microsoft Celeb1M dataset, and TensorFlow 1.4, and I want to use the new Dataset API for an image classification task.

I need to build a Dataset (an episode) of the following form: it contains N*k + 1 images, where N is the number of distinct classes and k the number of samples per class. The goal is to classify the last image into the correct class among the N classes, each represented by its k samples. For example, with N = 20 and k = 1, an episode holds 20*1 + 1 = 21 images.

For that, I have 16,000 TFRecord files on a hard drive, each about 20 MB. Each TFRecord contains the images for one class, about 50-100 images.
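
(The writing code isn't shown here; a hypothetical writer matching the feature schema parsed below, with made-up helper names, would look like this.)

import tensorflow as tf

def _int64(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def _bytes(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def write_class_record(path, images, label):
    """Hypothetical writer: one TFRecord file per class, matching the
    'image'/'label'/'height'/'width'/'channels' features parsed below."""
    with tf.python_io.TFRecordWriter(path) as writer:
        for img in images:  # img: a uint8 numpy array of shape (h, w, c)
            h, w, c = img.shape
            example = tf.train.Example(features=tf.train.Features(feature={
                'image': _bytes(img.tobytes()),
                'label': _int64(label),
                'height': _int64(h),
                'width': _int64(w),
                'channels': _int64(c),
            }))
            writer.write(example.SerializeToString())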

I want to pick N files at random, then k images at random from each, shuffle them together, and finally pick one more image to classify among the N classes, distinct from the k samples. To do that, I mixed "native" Python code with TensorFlow Dataset API methods.

The problem is that my solution takes painfully long to complete. Here is the working code I wrote to create such a dataset. For this example, I only take 20 files from the hard drive.

import tensorflow as tf
import os
import time
import numpy.random as rng

# Creating a few variables
data_dir = '/fastdata/Celeb1M/'
test_data = [data_dir + 'test/' + elt for elt in os.listdir(data_dir + 'test/')]

# Function to decode TFRecords
def read_and_decode(example_proto):
    features = tf.parse_single_example(
            example_proto,
            features = {
                'image': tf.FixedLenFeature([], tf.string),
                'label': tf.FixedLenFeature([], tf.int64),
                'height': tf.FixedLenFeature([], tf.int64),
                'width': tf.FixedLenFeature([], tf.int64),
                'channels': tf.FixedLenFeature([], tf.int64)
            })

    image = tf.decode_raw(features['image'], tf.uint8)
    image = tf.cast(image, tf.float32) * (1. / 255)
    height = tf.cast(features['height'], tf.int32)
    width = tf.cast(features['width'], tf.int32)
    channels = tf.cast(features['channels'], tf.int32)
    image = tf.reshape(image, [height, width, channels])
    label = tf.cast(features['label'], tf.int32)

    return image, label

def get_episode(classes_per_set, samples_per_class, list_files):
    """
    :param data_pack : train, val or test
    :param classes_per_set : N-way classification
    :param samples_per_class : k-shot classification
    :param list_files : list of length classes_per_set of files containing examples
    :return : an episode containing classes_per_set * samples_per_class + 1 images, the last of which is to be classified among the N*k others
    """
    assert classes_per_set == len(list_files)

    # Draw the image to classify from the last file, before re-shuffling the list.
    dataset = tf.data.TFRecordDataset(list_files[-1]).map(read_and_decode) \
              .shuffle(100)
    elt_to_classify = dataset.take(1)
    rng.shuffle(list_files)
    episode = tf.data.TFRecordDataset([list_files[-1]]) \
              .map(read_and_decode) \
              .shuffle(100) \
              .take(1)

    _ = list_files.pop()

    # Take one random image from each remaining class file.
    for class_file in list_files:
        element = tf.data.TFRecordDataset([class_file]) \
                  .map(read_and_decode) \
                  .shuffle(150) \
                  .take(1)
        episode = episode.concatenate(element)

    episode = episode.concatenate(elt_to_classify)
    return episode

#Testing the code
episode = get_episode(20, 1, test_data)
print("starting to build one_shot_iterator")
start = time.time()
iterator = episode.make_one_shot_iterator()
end = time.time()

print("time elapsed: ", end - start)

"""
Result :
starting to build one_shot_iterator
time elapsed:  188.75095319747925
"""

The step that takes too long is the iterator initialization. In my full code, which batches episodes, it takes about 15 minutes. I noticed that the issue is most likely due to evaluating episode.output_shapes: just adding a print(episode.output_shapes) at the end also takes a long time (though less than initializing an iterator).
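
For example, timing just that property access reproduces the slowdown in isolation (a minimal sketch, reusing the episode built above):

start = time.time()
shapes = episode.output_shapes  # walks the whole chain of concatenated datasets
print("output_shapes took: ", time.time() - start)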

Moreover, I work inside a Docker container, and while the iterator is initializing, I can see that the CPU is at 100% during the whole step.

I was wondering whether the cause was the mix of native Python code and TensorFlow operations, which could create a bottleneck on the CPU.

I thought that using the Dataset API only created operation nodes on the TensorFlow graph, and that the Dataset was evaluated only when calling tf.Session().run().
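
In other words, I expected the expensive part to happen only inside the session, following this pattern (a minimal sketch of what I had in mind):

# Graph construction: I expected these calls to return immediately...
iterator = episode.make_one_shot_iterator()
next_image, next_label = iterator.get_next()

# ...and the actual reading/decoding/shuffling to happen only here:
with tf.Session() as sess:
    image_val, label_val = sess.run([next_image, next_label])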

For more information, I tried:

episode = get_episode(50, 1, test_data[:50])
iterator = episode.make_one_shot_iterator()

After 3 hours, it still hadn't finished. I stopped the code; here is the traceback (I edited out some repeated blocks, such as the return self._as_variant_tensor() frames):

KeyboardInterrupt              Traceback (most recent call last)
<ipython-input-8-550523c179b3> in <module>()
      2 print("there")
      3 start = time.time()
----> 4 iterator = episode.make_one_shot_iterator()
      5 end = time.time()
      6 print("time elapsed: ", end - start)

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/ops/dataset_ops.py in make_one_shot_iterator(self)
    110       return self._as_variant_tensor()  # pylint: disable=protected-access
    111 
--> 112     _make_dataset.add_to_graph(ops.get_default_graph())
    113 
    114     return iterator_ops.Iterator(

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/framework/function.py in add_to_graph(self, g)
    484   def add_to_graph(self, g):
    485     """Adds this function into the graph g."""
--> 486     self._create_definition_if_needed()
    487 
    488     # Adds this function into 'g'.

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/framework/function.py in _create_definition_if_needed(self)
    319     """Creates the function definition if it's not created yet."""
    320     with context.graph_mode():
--> 321       self._create_definition_if_needed_impl()
    322 
    323   def _create_definition_if_needed_impl(self):

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/framework/function.py in _create_definition_if_needed_impl(self)
    336       # Call func and gather the output tensors.
    337       with vs.variable_scope("", custom_getter=temp_graph.getvar):
--> 338         outputs = self._func(*inputs)
    339 
    340       # There is no way of distinguishing between a function not returning

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/ops/dataset_ops.py in _make_dataset()
    108     @function.Defun(capture_by_value=True)
    109     def _make_dataset():
--> 110       return self._as_variant_tensor()  # pylint: disable=protected-access
    111 
    112     _make_dataset.add_to_graph(ops.get_default_graph())

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/ops/dataset_ops.py in _as_variant_tensor(self)
    998     # pylint: disable=protected-access
    999     return gen_dataset_ops.concatenate_dataset(
-> 1000         self._input_dataset._as_variant_tensor(),
   1001         self._dataset_to_concatenate._as_variant_tensor(),
   1002         output_shapes=nest.flatten(self.output_shapes),


~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/ops/dataset_ops.py in output_shapes(self)
   1006   @property
   1007   def output_shapes(self):
-> 1008     return nest.pack_sequence_as(self._input_dataset.output_shapes, [
   1009         ts1.most_specific_compatible_shape(ts2)
   1010         for (ts1, ts2) in zip(

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/ops/dataset_ops.py in output_shapes(self)
   1009         ts1.most_specific_compatible_shape(ts2)
   1010         for (ts1, ts2) in zip(
-> 1011             nest.flatten(self._input_dataset.output_shapes),
   1012             nest.flatten(self._dataset_to_concatenate.output_shapes))
   1013     ])

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/ops/dataset_ops.py in output_shapes(self)
   1009         ts1.most_specific_compatible_shape(ts2)
   1010         for (ts1, ts2) in zip(
-> 1011             nest.flatten(self._input_dataset.output_shapes),
   1012             nest.flatten(self._dataset_to_concatenate.output_shapes))
   1013     ])


~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/util/nest.py in pack_sequence_as(structure, flat_sequence)
    239     return flat_sequence[0]
    240 
--> 241   flat_structure = flatten(structure)
    242   if len(flat_structure) != len(flat_sequence):
    243     raise ValueError(

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/util/nest.py in flatten(nest)
    133     A Python list, the flattened version of the input.
    134   """
--> 135   return list(_yield_flat_nest(nest)) if is_sequence(nest) else [nest]
    136 
    137 

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/util/nest.py in is_sequence(seq)
    118   """
    119   return (isinstance(seq, (_collections.Sequence, dict))
--> 120           and not isinstance(seq, (list, _six.string_types)))
    121 
    122 

KeyboardInterrupt: 

So I'd like to know why initializing the iterator takes so long: I haven't found much information on how the initialization works, and on what exactly is evaluated when the graph is created.

I haven't been able to achieve what I want with purely tf.data.Dataset methods, but I haven't yet tried the tf.data.Dataset.flat_map()/interleave() methods (which were used in this thread).

Upvotes: 1

Views: 448

Answers (1)

mrry

Reputation: 126194

The code is so expensive because it loops over the 16,000 files in Python, creating O(16,000) nodes in the graph. Each Dataset.concatenate() wraps the previous dataset, so evaluating a property like output_shapes recurses through the entire chain, which is exactly what your traceback shows. However, you can avoid this by using Dataset.flat_map() to move the loop into the graph:

def get_episode(classes_per_set, samples_per_class, list_files):
    """
    :param data_pack : train, val or test
    :param classes_per_set : N-way classification
    :param samples_per_class : k-shot classification
    :param list_files : list of length classes_per_set of files containing examples
    :return : an episode containing classes_per_set * samples_per_class + 1 images, the last of which is to be classified among the N*k others
    """
    assert classes_per_set == len(list_files)

    elt_to_classify = tf.data.TFRecordDataset(list_files[-1]).map(read_and_decode) \
                      .shuffle(100) \
                      .take(1)

    rng.shuffle(list_files)

    # Special handling for the first file (smaller shuffle buffer).
    first_file = tf.data.TFRecordDataset([list_files[-1]]) \
                 .map(read_and_decode) \
                 .shuffle(100) \
                 .take(1)

    _ = list_files.pop()

    # Creates a nested dataset for each file in `list_files`, and 
    # concatenates them together.
    other_files = tf.data.Dataset.from_tensor_slices(list_files).flat_map(
        lambda filename: tf.data.TFRecordDataset(filename)
                         .map(read_and_decode)
                         .shuffle(150)
                         .take(1))

    episode = first_file.concatenate(other_files).concatenate(elt_to_classify)
    return episode
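
For completeness, the inner loop can also be written with Dataset.interleave(), which with cycle_length=1 visits one file at a time, just like flat_map() (a sketch, using the same list_files and read_and_decode as above):

# Alternative sketch: interleave with cycle_length=1 is equivalent to the
# flat_map() version above.
other_files = tf.data.Dataset.from_tensor_slices(list_files).interleave(
    lambda filename: tf.data.TFRecordDataset(filename)
                     .map(read_and_decode)
                     .shuffle(150)
                     .take(1),
    cycle_length=1)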

Upvotes: 1
