user2097439

Reputation: 207

Improving Flood-Fill Results Under Varying Lighting Conditions with OpenCV

I am trying to do contour detection on an image of tools. Due to their metal nature, the tools tend to reflect light and produce glare. To limit this effect I am using floodFill, but it is very sensitive to its parameters.

For example here is my original image:

original image

and this is the flooded one

image after flooding

This looks great. Now I try with a second image:

image 2 with different lighting conditions

As you can see in the flooded image, some tools are not handled correctly, so the contour detection will fail to give a good result.

flooded image 2

Here is the updated version:

import numpy as np
import cv2


# Return the labels of connected components whose width, height and area
# exceed the given thresholds (label 0 is the background and is skipped)
def getFilteredLabelIndex(stats, widthLowLimit=50, heightLowLimit=50, areaLowLimit=7000):
    ret = []
    for i in range(1, stats.shape[0]):
        # extract the connected component statistics for the current label
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        area = stats[i, cv2.CC_STAT_AREA]

        keepWidth = w > widthLowLimit
        keepHeight = h > heightLowLimit
        keepArea = area > areaLowLimit

        if all((keepWidth, keepHeight, keepArea)):
            ret.append(i)

    return ret

# Load the input image
impath = "q8djf.png"
originalImage = cv2.imread(impath)

# Keep a reference to the untouched image for drawing the final contours
# (np.maximum below rebinds originalImage, so birdEye keeps the original pixels)
birdEye = originalImage

seed = (35, 35)  # flood-fill seed point, placed in the background corner

# Raise every pixel to at least 10 so the black (0, 0, 0) fill used below
# cannot collide with pixels that were already black
originalImage = np.maximum(originalImage, 10)
foreground = originalImage.copy()

# Use floodFill for filling the background with black color
cv2.floodFill(foreground, None, seed, (0, 0, 0),
              loDiff=(5, 5, 5), upDiff=(5, 5, 5))

cv2.imshow("foreground", foreground)

gray = cv2.cvtColor(foreground, cv2.COLOR_BGR2GRAY)
cv2.imshow("gray", gray)

# Everything that was not flooded is at least 10 (see np.maximum above),
# so a threshold of 1 cleanly separates foreground from the flooded background
threshImg = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)[1]
cv2.imshow("threshImg", threshImg)

# Label the connected components (4-connectivity) and collect their statistics
(numLabels, labels, stats, centroids) = cv2.connectedComponentsWithStats(
    threshImg, 4, cv2.CV_32S)

filteredIdx = getFilteredLabelIndex(stats)

for i in filteredIdx:
    componentMask = (labels == i).astype("uint8") * 255

    # Dilate the mask to close small holes left by glare
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    componentMask = cv2.dilate(componentMask, kernel, iterations=3)

    # Keep only the largest contour of each component
    ctrs, _ = cv2.findContours(
        componentMask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    largestContour = max(ctrs, key=cv2.contourArea)

    cv2.drawContours(birdEye, [largestContour], -1, (255, 0, 255), 3)
    cv2.imshow("contour", birdEye)

cv2.imshow("original contour", birdEye)
cv2.waitKey(0)
cv2.destroyAllWindows()

Any suggestion would be welcome.

Upvotes: 0

Views: 600

Answers (1)

Rotem

Reputation: 32144

Sharpening the input image may help.

The sharpening operation enlarges the difference between the object and the background.

I am not sure how robust it is going to be for other images, but it improves the robustness of the current solution.

I used the sharpening solution from the following post: How can I sharpen an image in OpenCV?, but with different parameters for stronger sharpening.

The following code demonstrates the solution:

# https://stackoverflow.com/questions/4993082/how-can-i-sharpen-an-image-in-opencv
# Sharpen the image (unsharp masking: weighted original minus a Gaussian blur)
blur = cv2.GaussianBlur(birdEye, (0, 0), 3)
sharp_foreground = cv2.addWeighted(birdEye, 2, blur, -1, 0)
sharp_foreground = np.maximum(sharp_foreground, 10)

# Use floodFill for filling the background with black color
# Use loDiff and upDiff (10, 10, 10) instead of (5, 5, 5) for exaggerating the problem
cv2.floodFill(sharp_foreground, None, seed, (0, 0, 0),
              loDiff=(10, 10, 10), upDiff=(10, 10, 10))
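To plug this into the question's pipeline, sharp_foreground simply takes the place of foreground before the grayscale and threshold steps. A minimal sketch, assuming the rest of the question's code (including getFilteredLabelIndex) stays unchanged:

# Downstream steps from the question, now starting from sharp_foreground
gray = cv2.cvtColor(sharp_foreground, cv2.COLOR_BGR2GRAY)
threshImg = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)[1]
(numLabels, labels, stats, centroids) = cv2.connectedComponentsWithStats(
    threshImg, 4, cv2.CV_32S)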

With sharpening (sharp_foreground):
flooded result with sharpening

Without sharpening:
flooded result without sharpening

Note:
The example uses loDiff=(10, 10, 10) and upDiff=(10, 10, 10) just for demonstration.
Try keeping the values lower.
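
If you want a quick feel for how sensitive the result is, one option (a hypothetical sketch, not part of the original answer) is to sweep a few tolerance values and print the fraction of the frame that gets flooded; the fraction jumps sharply once the tolerance starts eating into the tools:

# Hypothetical tuning aid: rebuild the sharpened image (it was flooded in
# place above) and try several flood-fill tolerances on fresh copies
sharp = cv2.addWeighted(birdEye, 2, cv2.GaussianBlur(birdEye, (0, 0), 3), -1, 0)
sharp = np.maximum(sharp, 10)
for d in (3, 5, 8, 10, 15):
    trial = sharp.copy()
    cv2.floodFill(trial, None, seed, (0, 0, 0),
                  loDiff=(d, d, d), upDiff=(d, d, d))
    # Pixels were raised to at least 10, so gray == 0 only where flooded
    flooded = np.count_nonzero(cv2.cvtColor(trial, cv2.COLOR_BGR2GRAY) == 0)
    print(d, flooded / (trial.shape[0] * trial.shape[1]))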

Upvotes: 1
