Capdi

Image segmentation: image and mask IDs don't match when evaluating results in the prediction step

I have a dataset of images and image masks that feed a neural network. After training, I want to evaluate the results visually, so I wrote a function that displays the reference image, its mask, and the predicted mask in a 3 x 3 grid using the Keras ImageDataGenerator class, NumPy, and Matplotlib. But when the images are displayed, the reference image and the mask image are not related: they don't have the same ID.

For instance, the code may display the following:

[ ref_image_21, mask_image_43, predicted_image ]
[ ref_image_3, mask_image_38, predicted_image ]
[ ref_image_200, mask_image_12, predicted_image ]

Here is the code:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

target_size = (512, 512)

image_datagen = ImageDataGenerator(rescale=1./255)
mask_datagen = ImageDataGenerator()
test_image_generator = image_datagen.flow_from_directory('path/to/val_imgs', target_size=target_size, class_mode=None, batch_size=6)
test_mask_generator = mask_datagen.flow_from_directory('path/to/val_labels/', target_size=target_size, class_mode=None, batch_size=6)

def combine_generator(gen1, gen2, batch_list=6, training=True):
    # Rebuild batches of size `batch_list` by taking the first sample of
    # each incoming batch and stacking them; `training` is currently unused.
    while True:
        # First image/mask pair; keep only the first channel of the mask.
        image_batch = np.expand_dims(next(gen1)[0], axis=0)
        label_batch = np.expand_dims(np.expand_dims(next(gen2)[0][:, :, 0], axis=-1), axis=0)

        for i in range(batch_list - 1):
            image_i = np.expand_dims(next(gen1)[0], axis=0)
            label_i = np.expand_dims(np.expand_dims(next(gen2)[0][:, :, 0], axis=-1), axis=0)
            image_batch = np.concatenate([image_batch, image_i], axis=0)
            label_batch = np.concatenate([label_batch, label_i], axis=0)

        yield image_batch, label_batch

test_generator = combine_generator(test_image_generator, test_mask_generator, training=True)

def show_predictions_in_test(model_name, generator=None, num=3):
    if generator is None:
        generator = test_generator
    for i in range(num):
        image, mask = next(generator)
        # Pick the second sample of the batch for display.
        sample_image, sample_mask = image[1], mask[1]
        image = np.expand_dims(sample_image, axis=0)
        pr_mask = model_name.predict(image)
        # Collapse the class probabilities into a single-channel label map.
        pr_mask = np.expand_dims(pr_mask[0].argmax(axis=-1), axis=-1)
        display([sample_image, sample_mask, pr_mask])
    
def display(display_list, title=['Input Image', 'True Mask', 'Predicted Mask']):
    plt.figure(figsize=(15, 15))
    for i in range(len(display_list)):
        plt.subplot(1, len(display_list), i + 1)
        plt.title(title[i])
        plt.imshow(tf.keras.preprocessing.image.array_to_img(display_list[i]), cmap='magma')
        plt.axis('off')
    plt.show()

show_predictions_in_test(model)

What am I doing wrong?

Answers (1)

Capdi

Finally, I found the solution: I had to set the same seed parameter in both test_image_generator and test_mask_generator. With an identical seed, the two generators shuffle their files in the same order, so each image stays paired with its mask. So if we replace the lines below:

test_image_generator = image_datagen.flow_from_directory('path/to/val_imgs', target_size=target_size, class_mode=None, batch_size=6)
test_mask_generator = mask_datagen.flow_from_directory('path/to/val_labels/', target_size=target_size, class_mode=None, batch_size=6)

with:

seed = np.random.randint(0, 1e5)
test_image_generator = image_datagen.flow_from_directory('path/to/val_imgs/', seed=seed, target_size=target_size, class_mode=None, batch_size=6)
test_mask_generator = mask_datagen.flow_from_directory('path/to/val_labels/', seed=seed, target_size=target_size, class_mode=None, batch_size=6)
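
Note that seed must be identical in both calls: it seeds each generator's internal shuffling, so the two iterators draw files in the same order. If shuffling isn't needed during evaluation, a minimal alternative sketch (same hypothetical paths as above, and assuming the image and mask filenames sort into the same order) is to disable shuffling on both sides instead:

test_image_generator = image_datagen.flow_from_directory('path/to/val_imgs/', shuffle=False, target_size=target_size, class_mode=None, batch_size=6)
test_mask_generator = mask_datagen.flow_from_directory('path/to/val_labels/', shuffle=False, target_size=target_size, class_mode=None, batch_size=6)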

With the shared seed in place, the code works and displays the images as follows:

[ ref_image_21, mask_image_21, predicted_image ]
[ ref_image_3, mask_image_3, predicted_image ]
[ ref_image_200, mask_image_200, predicted_image ]
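
As a side note, since the seeded generators now advance in lockstep, combine_generator can be simplified to pair whole batches instead of rebuilding them sample by sample. This is only a minimal sketch, assuming the two directories hold matching image/mask files:

def combine_generator(image_gen, mask_gen):
    # With identical seeds, corresponding batches are already aligned,
    # so they can be paired directly with zip.
    for image_batch, mask_batch in zip(image_gen, mask_gen):
        # Keep only the first mask channel, as in the original code.
        yield image_batch, np.expand_dims(mask_batch[..., 0], axis=-1)

test_generator = combine_generator(test_image_generator, test_mask_generator)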
