Reputation: 2013
I'm trying to clean some chest X-ray data to feed to a CNN. My dataset currently contains many images where the bones are shown in white (higher pixel value than the background), like so:
Other images show the bones in a darker color than the background, like this:
Can you show me a way to label the two? I have no other external info about the images, though it can be assumed they are all the same size (about 1000x2000).
Assuming they have the same size and that the first row of pixels contains more than one distinct value (i.e. is not a blank border), I've written this simple code to compare a middle-ish pixel to the top-left one (which is likely to be part of the background):
if img[0, 0] > img[500, 500]:  # if background lighter than center
    img = 255 - img  # make the image negative
As you can see even from these samples I posted, this comparison is not always a good indicator: sometimes there is a halo around the background, or the pixel at [500, 500] happens to be similar to the background. Is there a more reliable way to detect whether an image of this kind is negative or not?
Consider that the dataset also contains some images with very little detail and shading, such as
Upvotes: 0
Views: 870
Reputation: 2013
Following the suggestion from Christoph Rackwitz, I got good results with this approach:
import numpy as np

def invert_if_negative(img):
    img = my_contrast_stretch(img)
    # Assuming the image has a fixed size of (1396, 1676).
    # Sample the two top corners (likely background)...
    top_left = img[:200, :200].flatten()
    top_right = img[:200, 1250:].flatten()
    # ...and a more or less central region (likely bone/tissue):
    center = img[1000:1300, 500:800].flatten()
    threshold = 120  # or computed from the average
    top_left = top_left > threshold
    top_right = top_right > threshold
    center = center > threshold
    perc_white_corners = (sum(top_left) + sum(top_right)) / (len(top_left) + len(top_right))
    perc_white_center = sum(center) / len(center)
    # A negative image has bright corners (background) and a darker center:
    if perc_white_corners > perc_white_center:
        img = 255 - img
    return img

def my_contrast_stretch(img):
    # Scale float images in [0, 1] up to [0, 255]:
    if img.dtype == np.float64:
        img = (img * 255).astype(np.uint8)
    # Stretch the intensity range to the full [0, 255] interval
    # (assumes the image is not completely uniform, i.e. M > m):
    M = np.max(img)
    m = np.min(img)
    res = img - m
    res = res * (255 / (M - m))
    return res.astype(np.uint8)
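For completeness, a minimal usage sketch, assuming the images are loaded as single-channel grayscale with OpenCV (the file names here are just placeholders):

import cv2

# Placeholder file names; load as single-channel grayscale:
img = cv2.imread("xray_sample.png", cv2.IMREAD_GRAYSCALE)
img = invert_if_negative(img)
cv2.imwrite("xray_sample_fixed.png", img)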
Upvotes: 0
Reputation: 5805
A possible solution involves equalizing the input image and then binarizing it with a fixed threshold value. We can then compute the percentage of white pixels and compare it against a threshold to decide whether a correction needs to be applied.
Let's see the code:
# Imports:
import numpy as np
import cv2
# Image path
path = "D://opencvImages//"
fileName = "RPWBn.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Convert BGR to grayscale:
originalGrayscale = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Equalize histogram
grayscaleImage = cv2.equalizeHist(originalGrayscale)
# It might be interesting to you to check out the image equalization:
cv2.imshow("Image Equalized", grayscaleImage)
cv2.waitKey(0)
# Binarize the image with a fixed threshold:
minThresh = 128
_, binaryImage = cv2.threshold(grayscaleImage, minThresh, 255, cv2.THRESH_BINARY)
# Compute the percent of white pixels:
(imageHeight, imageWidth) = binaryImage.shape[:2]
whitePercent = cv2.countNonZero(binaryImage)/(imageHeight * imageWidth)
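As a side note, the white-pixel fraction can also be computed directly with NumPy; this equivalent one-liner is not part of the original approach, just an alternative:

# Equivalent alternative: the mean of the boolean mask is the white fraction.
whitePercent = (binaryImage > 0).mean()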
Then, we check this value against a threshold to see if we must apply the correction. You have the option to correct both the original image and the equalized one:
if whitePercent > 0.5:
    print("Correcting images...")
    # Correct the original (unequalized) image:
    originalGrayscale = 255 - originalGrayscale
    cv2.imshow("Correction - Original Image", originalGrayscale)
    # Correct the equalized image:
    grayscaleImage = 255 - grayscaleImage
    cv2.imshow("Correction - Equalized Image", grayscaleImage)
    cv2.waitKey(0)
The sample image gets corrected. Here are the images for both versions:
Original inverted:
Equalized inverted:
Now, in addition to the image inversion, you might need some post-processing to improve the brightness and contrast of the original. We can achieve this using CLAHE (Contrast Limited Adaptive Histogram Equalization). Let's post-process the original, unequalized image:
# Improve the brightness + contrast of the original image via
# CLAHE.
# Gray to BGR conversion:
originalGrayscale = cv2.cvtColor(originalGrayscale, cv2.COLOR_GRAY2BGR)
# Conversion to LAB:
lab = cv2.cvtColor(originalGrayscale, cv2.COLOR_BGR2LAB)
# Split the channels:
l, a, b = cv2.split(lab)
# Apply CLAHE to L-channel:
# You might need to fiddle with the parameters:
clahe = cv2.createCLAHE(clipLimit=7.0, tileGridSize=(1, 1))
cl = clahe.apply(l)
# Merge the CLAHE enhanced L-channel with the a and b channel:
limg = cv2.merge((cl, a, b))
# Conversion from LAB to BGR:
final = cv2.cvtColor(limg, cv2.COLOR_LAB2BGR)
cv2.imshow("Original Corrected and Enhanced", final)
cv2.waitKey(0)
This is the enhanced image:
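Finally, if you need to label the whole dataset, the decision logic can be wrapped into a small helper. This is just a sketch of the same steps shown above; is_negative is a hypothetical name:

def is_negative(grayImage):
    # Equalize, binarize with the same fixed threshold, and
    # flag images that are mostly white (i.e. inverted):
    equalized = cv2.equalizeHist(grayImage)
    _, binary = cv2.threshold(equalized, 128, 255, cv2.THRESH_BINARY)
    (h, w) = binary.shape[:2]
    return cv2.countNonZero(binary) / (h * w) > 0.5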
Upvotes: 5