Reputation: 21
In order to create a neural network that processes images, I have to standardize the photos in the TensorFlow dataset called "stanford_dogs", which contains photos of different dog breeds.
I tried different techniques to standardize the images, and of those that work (limited to syntax compatible with TensorFlow Datasets), they all lead to the same result: very dark images.
Here is an example of 9 images before and after standardization:
Here is the pixel value of a random image before and after standardization:
Here is the actual code I use to import the dataset and then to standardize the images.
import tensorflow as tf
import tensorflow_datasets as tfds

# Load the train and test splits and merge them into a single dataset
ds = tfds.load('stanford_dogs', split='train', shuffle_files=True)
ds_test = tfds.load('stanford_dogs', split='test', shuffle_files=True)
ds = ds.concatenate(ds_test)

def resize_example(example, size):
    # Resize every image to size x size (values become float32, still in 0-255)
    image = tf.image.resize(example['image'], [size, size])
    return {'image': image, 'label': example['label']}

ds = ds.map(lambda example: resize_example(example, 300))
ds = ds.map(lambda x: {'image': tf.image.per_image_standardization(x['image']), 'label': x['label']})
I clumsily tried converting the color space from RGB to others (e.g. LAB or HSV), without success. I also tried standardizing only one of the three channels individually (leaving the other two untouched), but the photos were equally dark after standardization.
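Roughly what the single-channel attempt looked like (a sketch, not my exact code; standardizing the first channel here is just an example):

def standardize_first_channel(image):
    # Standardize only the first channel and leave the other two as they are
    first = tf.image.per_image_standardization(image[..., 0:1])
    return tf.concat([first, image[..., 1:]], axis=-1)

ds = ds.map(lambda x: {'image': standardize_first_channel(x['image']), 'label': x['label']})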
I would like to scale the photo pixels to the range 0 to 1 without the images darkening, so that they are easy to feed into the network.
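To make the target concrete, this is the kind of 0-to-1 scaling I mean (a minimal sketch, assuming the raw pixel values are in 0-255 so a plain division by 255 maps them to [0.0, 1.0]):

def rescale_example(example):
    # Assumption: pixel values are in 0-255, so dividing by 255 gives float32 in [0.0, 1.0]
    image = tf.cast(example['image'], tf.float32) / 255.0
    return {'image': image, 'label': example['label']}

ds = ds.map(rescale_example)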
Upvotes: 1
Views: 64