Niko Gamulin

Reputation: 66585

How to detect circular erosion/dilation

I would like to detect circular erosions and dilations on a line. For dilations, I tried to recursively erode the image, and on every recursion I check the width/height aspect ratio of each contour. If the ratio is smaller than 4, I assume the contour is circular, and for each such contour I calculate the circle center and radius from its moments and area. This is the function that detects circular dilations:

def detect_circular_dilations(img, contours):
    # Stop once repeated erosion has removed everything and convert the
    # collected contours to circles (centre/radius from moments and area).
    contours_current, hierarchy = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    if len(contours_current) == 0:
        return get_circles_from_contours(contours)
    for c in contours_current:
        x, y, w, h = cv2.boundingRect(c)
        # Aspect ratio of the bounding box, always >= 1
        if w > h:
            aspect_ratio = float(w) / h
        else:
            aspect_ratio = float(h) / w
        # Keep roughly square contours of plausible size
        if aspect_ratio < 4 and w < 20 and h < 20 and w > 5 and h > 5:
            contours.append(c)
    # Erode once more and recurse
    return detect_circular_dilations(cv2.erode(img, None, iterations=1), contours)
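
For completeness, get_circles_from_contours simply derives each circle's centre from the contour moments and its radius from the contour area; a minimal version looks roughly like this:

import math
import cv2

def get_circles_from_contours(contours):
    # Sketch only: centre from the contour moments,
    # radius from the enclosed area (area = pi * r^2).
    circles = []
    for c in contours:
        m = cv2.moments(c)
        if m['m00'] == 0:
            continue
        center = (m['m10'] / m['m00'], m['m01'] / m['m00'])
        radius = math.sqrt(cv2.contourArea(c) / math.pi)
        circles.append((center, radius))
    return circles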

An example of the circular dilations I want to detect is the following:

Circular Dilation

Another problem that I haven't solved is the detection of circular erosions. An example of a circular erosion is the following:

Circular Erosion

Here I've marked the circular erosion I would like to detect with a red rectangle. There might be some smaller circular patterns (on the left) that shouldn't be treated as actual circular erosions.

Does anyone know what the best way is to detect such circular shapes? For the circular dilations, I would appreciate any comment/suggestion that could make the detection more robust.

Thank you!

Upvotes: 2

Views: 1390

Answers (2)

kavko

Reputation: 2841

What I would try is to find the two edges of the line with cv2.Canny() and search for contours. If you sort the contours by the height of their bounding box (as in the code below), the first two contours will be your line's edges. After that you can calculate the minimum distance from each point on one edge to the other edge. Then you can take the median of those distances and say that if a point's distance is larger or smaller than the median (± a tolerance), that point belongs to either a dilation or an erosion of the line, and append it to a list. You can filter out noise if needed by iterating through the lists and removing points that are not consecutive (on the x axis).

Here is a simple example:

import cv2
import numpy as np
from scipy import spatial

def detect_dilation(median, mindist, tolerance):
    # Classify each point of edge 1 by its distance to edge 2:
    # noticeably farther than the median -> dilation, closer -> erosion.
    count = 0
    for i in mindist:
        if i > median + tolerance:
            dilate.append((reshape_e1[count][0], reshape_e1[count][1]))
        elif i < median - tolerance:
            erode.append((reshape_e1[count][0], reshape_e1[count][1]))
        else:
            pass
        count += 1

def other_axis(dilate, cnt):
    # Add the points of the second edge that share an x coordinate
    # with the points already collected from the first edge.
    temp = []
    for i in dilate:
        temp.append(i[0])
    for i in cnt:
        if i[0] in temp:
            dilate.append((i[0], i[1]))

img = cv2.imread('1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)

# Find the two edge contours (OpenCV 3.x returns image, contours, hierarchy)
_, contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
contours.sort(key=lambda cnt: cv2.boundingRect(cnt)[3])
edge_1 = contours[0]
edge_2 = contours[1]
reshape_e1 = np.reshape(edge_1, (-1, 2))
reshape_e2 = np.reshape(edge_2, (-1, 2))

# Nearest distance from every point of edge 1 to edge 2
tree = spatial.cKDTree(reshape_e2)
mindist, minid = tree.query(reshape_e1)
median = np.median(mindist)

dilate = []
erode = []
detect_dilation(median, mindist, 5)
other_axis(dilate, reshape_e2)
other_axis(erode, reshape_e2)

# Draw bounding boxes around the detected dilation (blue) and erosion (red)
dilate = np.array(dilate).reshape((-1, 1, 2)).astype(np.int32)
erode = np.array(erode).reshape((-1, 1, 2)).astype(np.int32)
x, y, w, h = cv2.boundingRect(dilate)
cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
x, y, w, h = cv2.boundingRect(erode)
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

Result:

[result image]

Edit:

If the picture has a broken line (which means more contours), you have to treat each contour as a separate line. You can achieve this by making a region of interest with the help of cv2.boundingRect(). But as I tried it with the newly uploaded picture, the process is not very robust, since you have to change the tolerance to get the desired result. Since I don't know what the other images look like, you may need a better way to get the average distance and the tolerance factor. Anyway, here is a sample of what I described (with 15 for the tolerance):

import cv2
import numpy as np
from scipy import spatial

def detect_dilation(median, mindist, tolerance):
    # Classify each point of edge 1 by its distance to edge 2:
    # noticeably farther than the median -> dilation, closer -> erosion.
    count = 0
    for i in mindist:
        if i > median + tolerance:
            dilate.append((reshape_e1[count][0], reshape_e1[count][1]))
        elif i < median - tolerance:
            erode.append((reshape_e1[count][0], reshape_e1[count][1]))
        else:
            pass
        count += 1

def other_axis(dilate, cnt):
    # Add the points of the second edge that share an x coordinate
    # with the points already collected from the first edge.
    temp = []
    for i in dilate:
        temp.append(i[0])
    for i in cnt:
        if i[0] in temp:
            dilate.append((i[0], i[1]))

img = cv2.imread('2.jpg')
gray_original = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh_original = cv2.threshold(gray_original, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Filling holes
_, contours, hierarchy = cv2.findContours(thresh_original, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    cv2.drawContours(thresh_original, [cnt], 0, 255, -1)
_, contours, hierarchy = cv2.findContours(thresh_original, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

for cnt in contours:
    x2, y, w2, h = cv2.boundingRect(cnt)
    thresh = thresh_original[0:img.shape[:2][1], x2+20:x2+w2-20]  # Region of interest for every "line"
    edges = cv2.Canny(thresh, 100, 200)
    # Edge contours within this line's region of interest
    _, edge_contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    edge_contours.sort(key=lambda c: cv2.boundingRect(c)[3])
    edge_1 = edge_contours[0]
    edge_2 = edge_contours[1]
    reshape_e1 = np.reshape(edge_1, (-1, 2))
    reshape_e2 = np.reshape(edge_2, (-1, 2))
    # Nearest distance from every point of edge 1 to edge 2
    tree = spatial.cKDTree(reshape_e2)
    mindist, minid = tree.query(reshape_e1)
    median = np.median(mindist)
    dilate = []
    erode = []
    detect_dilation(median, mindist, 15)
    other_axis(dilate, reshape_e2)
    other_axis(erode, reshape_e2)
    dilate = np.array(dilate).reshape((-1, 1, 2)).astype(np.int32)
    erode = np.array(erode).reshape((-1, 1, 2)).astype(np.int32)
    # Draw the detections inside the ROI (blue = dilation, red = erosion)
    if len(dilate) > 0:
        x, y, w, h = cv2.boundingRect(dilate)
        cv2.rectangle(img[0:img.shape[:2][1], x2+20:x2+w2-20], (x, y), (x+w, y+h), (255, 0, 0), 2)
    if len(erode) > 0:
        x, y, w, h = cv2.boundingRect(erode)
        cv2.rectangle(img[0:img.shape[:2][1], x2+20:x2+w2-20], (x, y), (x+w, y+h), (0, 0, 255), 2)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

Result:

[result image]

Upvotes: 1

Cris Luengo

Reputation: 60799

Problems like this are often solved using the distance transform and the medial axis transform. These are related in a way, as the medial axis runs along the ridge of the distance transform. The general idea is:

  1. Compute the distance transform of the image (for each foreground pixel, it returns the distance to the nearest background pixel; some libraries implement this the other way around, in which case you need to compute the distance transform of the inverted image).

  2. Compute the medial axis (or skeleton).

  3. The values of the distance transform along the medial axis are the relevant ones; we ignore all other pixels. They give the local radius of the line.

  4. Local maxima are centroids of the dilations. Use a threshold to determine which of these are important dilations and which ones are not (a noisy outline will cause many local maxima).

  5. Local minima are centroids of the erosions.

For example, I got the following output using the MATLAB code below.

detected dilations

Here is the code I used. It uses MATLAB with DIPimage 3, just as a quick proof of principle. This should be straightforward to translate to Python with whatever image processing library you like to use.

% Read in image and remove the red markup:
img = readim('https://i.sstatic.net/bNOTn.jpg');
img = img{3}>100;
img = closing(img,5);

% This is the algorithm described above:
img = fillholes(img);               % Get rid of holes
radius = dt(img);                   % Distance transform
m = bskeleton(img);                 % Medial axis
radius(~m) = 0;                     % Ignore all pixels outside the medial axis
detection = dilation(radius,25)==radius & radius>25; % Local maxima with radius > 25
pos = findcoord(detection);         % Coordinates of detections
radius = double(radius(detection)); % Radii of detections

% This is just to make the markup:
detection = newim(img,'bin');
for ii=1:numel(radius)
   detection = drawshape(detection,2*radius(ii),pos(ii,:),'disk');
end
overlay(img,detection)
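
As a rough Python/OpenCV sketch of the same steps (assuming scikit-image for the skeleton and SciPy for the local-maximum filter; the file name, the foreground polarity after thresholding and the size/radius value of 25 are placeholders to tune, and the hole filling done above is omitted):

import cv2
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

# Read and binarize; the line is assumed to end up as the white
# foreground (invert the threshold otherwise)
img = cv2.imread('line.jpg', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 1. Distance transform: distance of each foreground pixel to the background
radius = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

# 2. Medial axis / skeleton of the foreground
skeleton = skeletonize(binary > 0)

# 3. Keep the distance values only along the medial axis
radius[~skeleton] = 0

# 4. Local maxima of the radius along the skeleton mark dilation centres;
#    local minima along the skeleton would mark erosions analogously.
maxima = (ndimage.maximum_filter(radius, size=25) == radius) & (radius > 25)

# Mark the detections with circles of the detected radius
markup = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for y, x in zip(*np.nonzero(maxima)):
    cv2.circle(markup, (int(x), int(y)), int(radius[y, x]), (0, 0, 255), 2)

cv2.imshow('detections', markup)
cv2.waitKey(0)
cv2.destroyAllWindows()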

Upvotes: 0
