Kaschi14

"Smart" Bit depth (bits per pixel) reduction in medical images

I want to train a CNN on mammography (greyscale) DICOM images that have 14 bits per pixel. To train the CNN I want to reduce the images to 8 bits. I tried to do that by simply computing:

scaled_img = (img_arr / 2**14) * 255
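A minimal runnable version of this rescale, assuming the pixel data is already a NumPy array (the example array here is made up; a real pipeline would read it from the DICOM file, e.g. with pydicom). Note that dividing by `2**14 - 1` instead of `2**14` maps the maximum 14-bit value exactly to 255:

```python
import numpy as np

# Hypothetical stand-in for a 14-bit mammogram pixel array.
img_arr = np.array([[0, 4096], [8192, 16383]], dtype=np.uint16)

# Linear rescale: map the full 14-bit range [0, 2**14 - 1] onto [0, 255].
scaled_img = ((img_arr / (2**14 - 1)) * 255).astype(np.uint8)
```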

However, the pixel distribution of an exemplary input image is far from uniform, as the image below shows:

[Image: pixel-value histogram of an exemplary input image]

So with the linear transformation above I lose a lot of information: the breast in the output image comes out almost completely white, with little or no gradation observable.

I'm looking for a smart way to reduce the bit-depth but keep most of the information.

What I tried so far: I clip all pixels above the 95% quantile to the 95% quantile, and then divide by that quantile instead of by 2**14.
This discards rare grayscale values but increases the contrast of the 8-bit output, because a smaller value range is mapped. Mapping the full 16384 values (14 bit) to 256 (8 bit) collapses 64 grayscale values into 1; clipping at the 95% quantile (about 6000) collapses only ~23 values into 1.
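The clipping step described above can be sketched as a small function (a sketch, not the asker's exact code; the function name and the choice of `np.percentile` are my own):

```python
import numpy as np

def quantile_rescale(img_arr, q=95):
    """Clip pixels above the q-th percentile, then rescale to 8-bit.

    Everything at or above the cutoff maps to 255, so the bulk of the
    distribution gets the full 0-255 range instead of a narrow band.
    """
    cutoff = np.percentile(img_arr, q)
    clipped = np.minimum(img_arr, cutoff)
    return ((clipped / cutoff) * 255).astype(np.uint8)
```

Applied per image, this is easy to batch over a whole dataset, though the cutoff (and thus the mapping) differs from image to image.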

My approach is tedious and still loses information. I'm looking for an algorithm that can easily be applied to many images. I did find this, but it only works with 12-bit images.

Upvotes: 2

Views: 123

Answers (0)
