March Ho

Reputation: 420

OpenCV Simple Blob Detector not detecting all blobs

I am trying to port one of my image analysis scripts from Mathematica to Python OpenCV, but I am having trouble with one of the functions involved.

I managed to binarise and watershed the image, much as one does in Mathematica. However, the step that filters the connected components by their properties does not seem to be working correctly.

The input image is below:

Input image

I attempted to run the following code:

import cv2
import numpy as np

img = cv2.imread('test2.4.png', 1)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Set up the detector and configure its params.
params = cv2.SimpleBlobDetector_Params()
params.minDistBetweenBlobs = 0
params.filterByColor = True
params.blobColor = 255
params.filterByArea = True
params.minArea = 10
params.maxArea = 300000
params.filterByCircularity = False
params.filterByConvexity = False
params.filterByInertia = True
params.minInertiaRatio = 0.01
params.maxInertiaRatio = 1
detector = cv2.SimpleBlobDetector_create(params)

# Detect blobs.
keypointsb = detector.detect(img)

# Draw detected blobs as red circles.
im_with_keypoints = cv2.drawKeypoints(img, keypointsb, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

# Show keypoints
cv2.imwrite('test3.png',im_with_keypoints)

As seen in the code, I have set the parameters for the blob detection to be as permissive as possible. However, a large proportion of the blobs are not detected, and none of the watershed-split blobs were detected either.

I have checked the documentation for the function and tweaked most of its parameters, with the exception of the thresholds and repeatability (as the image is already binarised). Is there any other configuration I should perform in order for the function to detect all of the blobs present?


Alternatively, are there any other recent/well-maintained libraries that are capable of filtering by component measurements?
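To illustrate the kind of filtering I mean, here is a rough sketch using `scipy.ndimage` (one candidate library) that labels connected components and keeps only those within an area range, mirroring the `minArea`/`maxArea` parameters above. `filter_components` is my own illustrative helper, not a library function:

```python
import numpy as np
from scipy import ndimage

def filter_components(binary, min_area=10, max_area=300000):
    """Keep only connected components whose pixel area is in range."""
    labels, n = ndimage.label(binary)
    idx = np.arange(1, n + 1)
    # per-label pixel counts (ndimage.sum works on older SciPy too)
    areas = ndimage.sum(binary > 0, labels, index=idx)
    keep = idx[(areas >= min_area) & (areas <= max_area)]
    return np.isin(labels, keep), len(keep)
```

Other measurements (centroid, bounding box, etc.) are available via functions like `ndimage.center_of_mass` on the same label array.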

Upvotes: 2

Views: 5102

Answers (1)

C. Wang

Reputation: 31

I know it's been a while, but I am working on a similar task here, and was interested in how you used width=1 lines to separate the connected blobs.

I played around with SimpleBlobDetector for a while, and from reading its source code, it briefly performs the following steps:

  1. binarize the image using different thresholds, from minThreshold to maxThreshold in steps of thresholdStep
  2. find contours in each binarized image, applying the filters here (e.g. area, color, circularity, convexity, inertia, etc.)
  3. combine all filtered contours according to their positions, i.e. keep contours whose distance is larger than minDistBetweenBlobs and which do not overlap
  4. store and return keypoints for all blobs (contours) kept

So I checked each step of SimpleBlobDetector using the simple code below, and found that the width=1 lines you used to separate the connected blobs were themselves found as individual contours/blobs (the contours/blobs are shown in red and their centers in green in the attached image), especially the non-horizontal/vertical lines (1-pixel contours). Those small line contours were then filtered out by either minArea or blobColor = 255. That's why your split blobs were detected as single bigger blobs instead.

import cv2
import numpy as np

img = cv2.imread('test3.png', 1)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

ret, bin_img = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

out_img = img
temp_bin_img = bin_img.copy()
# [-2:] keeps this working across OpenCV versions (3.x returns an extra image)
contours, hierarchy = cv2.findContours(temp_bin_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[-2:]
for i in range(len(contours)):
    M = cv2.moments(contours[i])
    if(M['m00'] == 0.0):
        continue
    x, y = int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])

    out_img = cv2.drawContours(out_img, contours, i, (0,0,255), 1)
    cv2.circle(out_img, (x, y), 1, (0,255,0), -1)

cv2.imwrite('test3-contours.png', out_img)

test3-contours.png

To improve on this, you could probably try erosion first to widen the boundaries, and then use SimpleBlobDetector, or use findContours yourself. Like this:

import cv2
import numpy as np

img = cv2.imread('YUSAQ.png', 1)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

ret, bin_img = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

kernel = np.ones((3,3),np.uint8)
erosion = cv2.erode(bin_img, kernel, iterations = 1)

out_img = img
temp_bin_img = erosion.copy()
# [-2:] keeps this working across OpenCV versions (3.x returns an extra image)
contours, hierarchy = cv2.findContours(temp_bin_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[-2:]
for i in range(len(contours)):
    M = cv2.moments(contours[i])
    if(M['m00'] == 0.0):
        continue
    x, y = int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])

    out_img = cv2.drawContours(out_img, contours, i, (0,0,255), 1)
    cv2.circle(out_img, (x, y), 1, (0,255,0), -1)

cv2.imwrite('test3-erosion.png', out_img)

Using a 3x3 kernel for the erosion results in the found blobs being 1~2 pixels smaller than the original blobs. I didn't correct for this (I didn't even try to think of it), but if you'd like to, I think you could do it yourself. Hope this helps.

test3-erosion.png

Upvotes: 3
