Loki

Reputation: 71

What is the right way to normalize satellite images to feed into a neural network?

I am trying to feed small patches of satellite image data (Landsat-8 Surface Reflectance bands) into neural networks for my project. However, the downloaded image values range from 1 to 65535.

So I tried dividing the images by 65535 (the maximum possible value), but plotting them shows an almost entirely black/brown image like this:

RGB image normalized by the maximum possible value

But most of the images do not have values anywhere near 65535.

Without any normalization, the image looks all white.

Un-normalized RGB image plotted with matplotlib imshow

Dividing the image by 30000 looks like this:

RGB image divided by 30000

If the images are too dark or too light, my network may not perform as intended.

Is dividing the image by the maximum possible value (65535) the only solution, or are there other ways to normalize images, especially for satellite data?

Upvotes: -1

Views: 1207

Answers (2)

iohans

Reputation: 858

There are other ways to normalize images. Standardization is the most common way (subtract the mean and divide by the standard deviation).

Using numpy:

import numpy as np

image = (image - np.mean(image)) / np.std(image)

As I mentioned in a clarifying comment, you want the normalization method to match how the NN's training set was normalized.
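To illustrate that point, here is a minimal sketch (the array shapes and variable names are made up for the example): the mean and standard deviation are computed once on the training set and then reused for the test patches, so every split is scaled the same way.

```python
import numpy as np

# Hypothetical stand-ins for training and test patches.
rng = np.random.default_rng(0)
train = rng.integers(1, 65535, size=(100, 32, 32)).astype(np.float64)
test = rng.integers(1, 65535, size=(10, 32, 32)).astype(np.float64)

# Compute statistics on the training set only.
train_mean = train.mean()
train_std = train.std()

train_norm = (train - train_mean) / train_std
# Apply the *training* statistics to the test set -- do not recompute them.
test_norm = (test - train_mean) / train_std
```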

Upvotes: 0

Pranav Prakash

Reputation: 19

There is a really good blog post by Sentinel Hub on different normalization schemes that can be used for satellite images: https://medium.com/sentinel-hub/how-to-normalize-satellite-images-for-deep-learning-d5b668c885af

The main issue with this data is that it is heavy-tailed, so it requires special treatment: pixel values in the tail can greatly influence the normalization statistics. In the blog post you can see that min/max normalization is the worst option, while schemes based on percentiles tend to perform well because they handle outliers. In my personal experience, the Dynamic World approach of log normalization has given the best results, because normalizing in log space converts the heavy-tailed distribution into something close to a normal distribution.
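As a rough sketch of two of these schemes in NumPy (the function names and the 1st/99th percentile choices are mine, not taken from the blog post):

```python
import numpy as np

def percentile_normalize(image, lower=1, upper=99):
    """Clip to the given percentiles, then scale to [0, 1].

    Clipping limits the influence of heavy-tailed outliers on the
    scaling, unlike plain min/max normalization.
    """
    lo, hi = np.percentile(image, [lower, upper])
    clipped = np.clip(image, lo, hi)
    return (clipped - lo) / (hi - lo)

def log_normalize(image):
    """Log-transform then standardize (similar in spirit to the
    Dynamic World approach mentioned above)."""
    log_img = np.log1p(image.astype(np.float64))  # log(1 + x) avoids log(0)
    return (log_img - log_img.mean()) / log_img.std()
```

Either function can be applied per band rather than to the whole patch, which is usually preferable since reflectance distributions differ between bands.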

Also, please note that even though the image is 16-bit, almost all of the pixel values will be much smaller than 65535. Satellite images store reflectance values, and the majority of objects on Earth, including vegetation, do not have high reflectance.

Upvotes: 0
