Marlon Teixeira

Reputation: 393

How to use Keras ImageDataGenerator for feeding a pix2pix CNN model?

I'm trying to use the Keras ImageDataGenerator to train a pix2pix CNN model that maps input images to output images. The Keras ImageDataGenerator is easy to use for image classification, but I'm having trouble training a pix2pix model with it. Here is my attempt:

Custom generator:

import tensorflow as tf

class JoinedGen(tf.keras.utils.Sequence):
    """Yields (input_batch, target_batch) pairs from two parallel iterators."""

    def __init__(self, input_gen, target_gen):
        self.input_gen = input_gen
        self.target_gen = target_gen

        # Both iterators must yield the same number of batches.
        assert len(input_gen) == len(target_gen)

    def __len__(self):
        return len(self.input_gen)

    def __getitem__(self, i):
        # Return the i-th batch of inputs and targets as an (x, y) pair.
        x = self.input_gen[i]
        y = self.target_gen[i]

        return x, y

    def on_epoch_end(self):
        # Reset both iterators, then force the target iterator to use the
        # same ordering as the input iterator.
        self.input_gen.on_epoch_end()
        self.target_gen.on_epoch_end()
        self.target_gen.index_array = self.input_gen.index_array

Implementation with ImageDataGenerator:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# One augmenting generator shared by inputs and targets.
generator = ImageDataGenerator(shear_range=0.2,
                               zoom_range=0.2,
                               horizontal_flip=True,
                               validation_split=0.3)

# `path` is the dataset root containing the 'area' (input) and 'sat' (target) subfolders.
input_gen = generator.flow_from_directory(path,
                                          classes=['area'],
                                          shuffle=False,
                                          target_size=(256, 256),
                                          class_mode=None,
                                          batch_size=32,
                                          subset='training')

target_gen = generator.flow_from_directory(path,
                                          classes=['sat'],
                                          shuffle=False,
                                          target_size=(256, 256),
                                          class_mode=None,
                                          batch_size=32,
                                          subset='training')

input_gen_val = generator.flow_from_directory(path,
                                          classes=['area'],
                                          shuffle=False,
                                          target_size=(256, 256),
                                          class_mode=None,
                                          batch_size=32,
                                          subset='validation')

target_gen_val = generator.flow_from_directory(path,
                                          classes=['sat'],
                                          shuffle=False,
                                          target_size=(256, 256),
                                          class_mode=None,
                                          batch_size=32,
                                          subset='validation')
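For context, here is a minimal sketch of how the wrapper and the four iterators would be combined for training; the compiled pix2pix-style model (called model here) is an assumption, since it is not shown above:

# Hypothetical usage sketch: pair the iterators and train.
train_gen = JoinedGen(input_gen, target_gen)
val_gen = JoinedGen(input_gen_val, target_gen_val)

# `model` is assumed to be a compiled image-to-image network mapping
# (256, 256, 3) inputs to (256, 256, 3) targets.
model.fit(train_gen,
          validation_data=val_gen,
          epochs=50)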

But when I ask for the first image of both training generators using input_gen.next()[0] and target_gen.next()[0], they don't give me a corresponding input/output pair!

Upvotes: 1

Views: 162

Answers (1)

Marlon Teixeira

Reputation: 393

As stated in the Keras documentation, the solution is to "provide the same seed and keyword arguments to the fit and flow methods", e.g. seed = 1.

Just add seed=1 to each flow_from_directory call.
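Applied to the generators in the question, that means passing the same seed to every flow_from_directory call so the random shear/zoom/flip transforms are drawn identically for the input and target iterators (a sketch reusing the names from the question):

seed = 1

input_gen = generator.flow_from_directory(path,
                                          classes=['area'],
                                          shuffle=False,
                                          target_size=(256, 256),
                                          class_mode=None,
                                          batch_size=32,
                                          subset='training',
                                          seed=seed)

target_gen = generator.flow_from_directory(path,
                                          classes=['sat'],
                                          shuffle=False,
                                          target_size=(256, 256),
                                          class_mode=None,
                                          batch_size=32,
                                          subset='training',
                                          seed=seed)

# Do the same for input_gen_val and target_gen_val with subset='validation'.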

Check out the link here for more information.

Upvotes: 1
