Lutfi Lais

Reputation: 43

Getting cleaner blobs for counting

Still on my journey of learning image masking.

I'm trying to count the number of red dots in an image.

Here is the input image:

After masking red, I get this mask:

The problem is that some of the blobs aren't complete, so not all of them get counted; in this specific image, for example, it misses blobs 6 and 9 (assuming the top left is 1).

How do I refine the masking process to get more accurate blobs? (A rough idea I had is sketched after the second code block below.)

Masking Code:

import cv2, os
import numpy as np

os.chdir(r'C:\Program Files\Python\projects\Blob')

#Get image input
image_input = cv2.imread('realbutwithacrylic.png')
image_input = np.copy(image_input)
rgb = cv2.cvtColor(image_input, cv2.COLOR_BGR2RGB)

#Range of color wanted
lower_red = np.array([125, 1, 0])
upper_red = np.array([200, 110, 110])

#Masking the Image
first_mask = cv2.inRange(rgb, lower_red, upper_red)

#Output
cv2.imshow('first_mask', first_mask)
cv2.waitKey()

Masking Code with Blob Counter

import cv2, os
import numpy as np

#Change the working directory so the relative image path resolves (VS Code launches from a different folder)
os.chdir(r'C:\Program Files\Python\projects\Blob')

#Get image input
image_input = cv2.imread('realbutwithacrylic.png')
image_input = np.copy(image_input)
rgb = cv2.cvtColor(image_input, cv2.COLOR_BGR2RGB)

#Range of color wanted
lower_red = np.array([125, 1, 0])
upper_red = np.array([200, 110, 110])

#Masking the Image
first_mask = cv2.inRange(rgb, lower_red, upper_red)

#Initial masking counter
cv2.imshow('first_mask', first_mask)
cv2.waitKey()

#Blob Counter
thresh = cv2.threshold(first_mask,0,255,cv2.THRESH_OTSU + cv2.THRESH_BINARY)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7,7))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=5)

cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

#Counting the blobs
blobs = 0
for c in cnts:
    area = cv2.contourArea(c)
    cv2.drawContours(first_mask, [c], -1, (36,255,12), -1)
    #A large blob is assumed to be two touching dots
    if area > 13000:
        blobs += 2
    else:
        blobs += 1

#Blob Number Output
print('blobs:', blobs)

#Masking Output
cv2.imshow('thresh', thresh)
cv2.imshow('opening', opening)
cv2.imshow('image', image_input)
cv2.imshow('mask', first_mask)
cv2.waitKey()
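
Would a morphological closing on the mask be the right way to fill in the partial blobs before counting? Here is a rough sketch of what I mean (the kernel size and iteration count are just guesses):

#Fill small gaps inside the blobs before the opening/counting step
close_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
closed = cv2.morphologyEx(first_mask, cv2.MORPH_CLOSE, close_kernel, iterations=2)

cv2.imshow('closed', closed)
cv2.waitKey()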

Upvotes: 3

Views: 381

Answers (3)

Renat Gilmanov

Reputation: 17895

While both answers provide a proper solution, it might be important to mention the following:

  • Cris Luengo managed to provide a noise-free mask, which is much easier to deal with
  • Both solutions effectively introduce a color difference / delta_E measure, which is worth pointing out (it might require an additional dependency, but it will definitely simplify everything)
  • for these particular red markers it might not be that critical (see the example below), but it is good to have in terms of reliability

Just a small PoC (no code, using a custom segmentation pipeline):

[segmentation result]

and mask:

[mask]

If you think delta_E is overkill, just check a few examples with dynamic scenes and changing lighting conditions. Any attempt to achieve that by hardcoding specific colors is likely to fail.
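
A minimal sketch of what a delta_E based mask could look like (this is not the author's pipeline; it assumes scikit-image is available, and the reference red and the threshold below are guesses that need tuning):

import cv2
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

bgr = cv2.imread('realbutwithacrylic.png')
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

lab = rgb2lab(rgb)                           # whole image in CIELAB
ref = rgb2lab(np.uint8([[[170, 40, 40]]]))   # assumed reference red, a guess

# Per-pixel CIEDE2000 distance to the reference colour
dE = deltaE_ciede2000(lab, np.broadcast_to(ref, lab.shape))

# Threshold on the colour difference instead of on raw channel values
mask = (dE < 25).astype(np.uint8) * 255

cv2.imshow('deltaE mask', mask)
cv2.waitKey()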

Upvotes: 0

Cris Luengo

Reputation: 60761

@AKX has a good suggestion, but I would prefer HSI (as described in A. Hanbury and J. Serra, “Colour image analysis in 3D-polar coordinates”, Joint Pattern Recognition Symposium, 2003), which is typically more suited for image analysis than HSV. Note that this is not the same as another common conversion often also referred to as HSI, which involves an arc cosine operation -- this HSI does not involve trigonometry. For details, if you don't have access to the paper above, see an implementation in C++.

Also, the Gaussian blur should be quite a bit stronger. You have a JPEG-compressed image, with pretty strong compression. JPEG destroys colors, because we're not good at seeing color edges. Our best solution for this image is to apply a lot of smoothing. The better solution would be to improve the imaging, of course.

A proper threshold on the hue channel should allow us to exclude all the orange, which has a different hue than red (which is, by definition, close to 0 degrees). We also must exclude pixels with a low saturation, as some of the dark areas could have a red hue.

I'm showing how to do this with DIPlib because I'm familiar with it (disclosure: I'm an author). I'm sure you can do the same things with OpenCV, though you might need to implement the HSI color space conversion from scratch.

import diplib as dip

img = dip.ImageRead('aAvJj.jpg')
img = dip.Gauss(img, 2)    # sigma = 2
hsi = dip.ColorSpaceManager.Convert(img,'hsi')
h = hsi(0)  # hue channel (degrees)
s = hsi(1)  # saturation channel
h = (h + 180) % 360 - 180  # turn range [180,360] into [-180,0]
dots = (dip.Abs(h) < 5) & (s > 45)  # hue within 5 degrees of red, and saturated enough

[the images produced by the code above]

To count the dots you can now simply:

lab = dip.Label(dots)
print(dip.MaximumAndMinimum(lab)[1])

...which says 10.
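
For readers who want to stay in OpenCV, a rough approximation of the same idea (hue close to red plus a saturation cutoff) can be written with the built-in HSV conversion; note this is not the HSI space from the paper, and the blur sigma, hue tolerance, and saturation threshold below are guesses that would need tuning:

import cv2
import numpy as np

img = cv2.imread('aAvJj.jpg')
img = cv2.GaussianBlur(img, (0, 0), 2)          # sigma = 2, as in the DIPlib code
h, s, _ = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))

# OpenCV stores hue as degrees/2 (0..179); measure each pixel's distance from red (0 degrees)
hue_deg = h.astype(np.int32) * 2
hue_dist = np.minimum(hue_deg, 360 - hue_deg)

# Keep near-red hues with enough saturation
dots = ((hue_dist < 10) & (s > 90)).astype(np.uint8) * 255

num_labels, _ = cv2.connectedComponents(dots)
print('dots:', num_labels - 1)                  # label 0 is the background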

Upvotes: 2

AKX

Reputation: 169338

Since you're looking for bright enough reds, you might have a better time masking things in HSV space:

orig_image = cv2.imread("realbutwithacrylic.jpg")

image = orig_image.copy()
# Blur image to get rid of noise
image = cv2.GaussianBlur(image, (3, 3), cv2.BORDER_DEFAULT)
# Convert to hue-saturation-value
h, s, v = cv2.split(cv2.cvtColor(image, cv2.COLOR_BGR2HSV))
# "Roll" the hue value so reds (which would otherwise be at 0 and 255) are in the middle instead.
# This makes it easier to use `inRange` without needing to AND masks together.
image = cv2.merge(((h + 128) % 255, s, v))
# Select the correct hues with saturated-enough, bright-enough colors.
image = cv2.inRange(image, np.array([40, 128, 100]), np.array([140, 255, 255]))

For your image, the output is

[output mask]

which should be more straightforward to work with.
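
To go from that mask to a count (tying back to the original question), connected components should be enough; the minimum-area filter below is an assumption to drop compression specks and would need tuning:

# `image` is the inRange mask from the snippet above
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(image)

min_area = 50   # assumed minimum blob area, tune for your image
dot_count = sum(1 for i in range(1, num_labels)   # label 0 is the background
                if stats[i, cv2.CC_STAT_AREA] >= min_area)
print('dots:', dot_count)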

Upvotes: 5
