Reputation: 33
Here's my issue: I want to find the actual maximum width of a boundary in a segmented image with an irregular shape. Below I post some example images I use for testing. So far I have managed to obtain the boundaries and the skeleton line, but how do I measure the distance between the contours perpendicular to the skeleton line?
import cv2
import numpy as np
import matplotlib.pyplot as plt
from skimage import filters, morphology

def get_skeleton(image_path):
    # read as grayscale, threshold with Otsu, then skeletonize the binary mask
    im = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    binary = im > filters.threshold_otsu(im)
    skeleton = morphology.skeletonize(binary)
    return skeleton

skeleton = get_skeleton(img_path)
plt.imshow(skeleton, cmap="gray")
def get_boundary(image_path):
    # Canny edges on the image, then draw the external contours onto an empty canvas
    reading_Img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    reading_Img = cv2.cvtColor(reading_Img, cv2.COLOR_GRAY2RGB)
    canny_Img = cv2.Canny(reading_Img, 90, 100)
    contours, _ = cv2.findContours(canny_Img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    canvas = np.zeros_like(reading_Img)
    boundary = cv2.drawContours(canvas, contours, -1, (255, 0, 0), 1)
    return boundary
boundary = get_boundary(img_path)
plt.imshow(boundary)
EDIT:
First of all, thanks for your answer. I would like to add more detail on what I am trying to do. I made a segmentation model which detects cracks in concrete (they can be any shape: vertical, horizontal, diagonal, etc.), and now I need to identify their maximum width and draw a line that shows where it occurs.
I found that the medial axis returns the distance from the boundary, and by filtering for the maximum value I was able to get the width (see the colab below) and its coordinate on the medial axis. Now I need to draw a line connecting the width between the boundaries, but I have no idea how to find the coordinates of such a line.
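The medial-axis step boils down to something like the simplified sketch below (binary is a placeholder name for the segmented crack mask):
from skimage.morphology import medial_axis
import numpy as np

# the distance map holds, for every foreground pixel, the distance to the nearest
# background pixel; its maximum along the medial axis marks the widest point
skel, distance = medial_axis(binary, return_distance=True)
dist_on_skel = distance * skel
max_pos = np.unravel_index(np.argmax(dist_on_skel), dist_on_skel.shape)
max_dist = dist_on_skel[max_pos]   # distance from the medial axis to the nearest boundary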
I thought of an algorithm which starts at the point on the medial axis where the maximum distance occurs and expands until it finds a boundary, but I don't know how to implement it.
This image shows what I need to have:
After I find the x and y of the points I will be able to calculate the Euclidean distance between the 2 points:
dist = sqrt((y2 - y1)^2 + (x2 - x1)^2)
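In Python, once the two endpoints are known, that is just:
import math
dist = math.dist((x1, y1), (x2, y2))   # same as sqrt((y2 - y1)**2 + (x2 - x1)**2)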
Please look at my code in colab notebook: https://colab.research.google.com/drive/1NvEyfrxpKGJ1kxjP48PGNB_UUSp6f6Ze?usp=sharing
sample input images:
https://i.sstatic.net/6b5Mu.jpg
https://i.sstatic.net/bPiHM.jpg
https://i.sstatic.net/TMpxX.jpg
Upvotes: 3
Views: 775
Reputation: 6288
Starting with your approach using the medial axis function, you can find the point with the maximal distance to the boundary, estimate the local direction of the medial axis around that point, and intersect the orthogonal line through it with the crack to get the two boundary points and the width line between them.
The example below shows the principle and works with your example images. But I'm sure there will be some boundary conditions that are not yet considered. I leave it to you to make it robust against real-life data.
import cv2
import numpy as np
from skimage.morphology import medial_axis
from skimage import img_as_ubyte
delta = 3 # delta index for interpolation
# get crack
im = cv2.imread("img.png", cv2.IMREAD_GRAYSCALE)
rgb = cv2.cvtColor(im, cv2.COLOR_GRAY2RGB) # rgb just for demo purpose
_, crack = cv2.threshold(im, 127, 255, cv2.THRESH_BINARY)
# get medial axis
medial, distance = medial_axis(crack, return_distance=True)
med_img = img_as_ubyte(medial)
med_contours, _ = cv2.findContours(med_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.drawContours(rgb, med_contours, -1, (255, 0, 0), 1)
med_pts = [v[0] for v in med_contours[0]]
# get point with maximal distance from medial axis
max_idx = np.argmax(distance)
max_pos = np.unravel_index(max_idx, distance.shape)
max_dist = distance[max_pos]
coords = np.array([max_pos[1], max_pos[0]])
print(f"max distance from medial axis to boundary = {max_dist} at {coords}")
# interpolate orthogonal of medial axis at coords
idx = next(i for i, v in enumerate(med_pts) if (v == coords).all())
px1, py1 = med_pts[(idx-delta) % len(med_pts)]
px2, py2 = med_pts[(idx+delta) % len(med_pts)]
orth = np.array([py1 - py2, px2 - px1]) * max(im.shape)
# intersect orthogonal with crack and get contour
orth_img = np.zeros(crack.shape, dtype=np.uint8)
cv2.line(orth_img, coords + orth, coords - orth, color=255, thickness=1)
gap_img = cv2.bitwise_and(orth_img, crack)
gap_contours, _ = cv2.findContours(gap_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
gap_pts = [v[0] for v in gap_contours[0]]
# determine the end points of the gap contour by negative dot product
n = len(gap_pts)
gap_ends = [
p for i, p in enumerate(gap_pts)
if np.dot(p - gap_pts[(i-1) % n], gap_pts[(i+1) % n] - p) < 0
]
print(f"Maximum gap found from {gap_ends[0]} to {gap_ends[1]}")
cv2.line(rgb, gap_ends[0], gap_ends[1], color=(0, 0, 255), thickness=1)
cv2.imwrite("test_out.png", rgb)
Upvotes: 2
Reputation:
From the pixel with the largest distance, you can explore concentric square layers until you find a background pixel. Then find the background pixel with the shortest Euclidean distance on that last layer. The second endpoint is symmetrical about the starting pixel.
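A minimal sketch of that idea, not a tested implementation: it assumes a binary mask crack (nonzero = crack) and the distance map distance returned by skimage's medial_axis, both names being placeholders.
import numpy as np

def nearest_background_point(crack, cy, cx):
    """Grow square layers around (cy, cx); as soon as one contains a
    background pixel, return the background pixel closest to the centre."""
    h, w = crack.shape
    for r in range(1, max(h, w)):
        ys = np.arange(max(cy - r, 0), min(cy + r + 1, h))
        xs = np.arange(max(cx - r, 0), min(cx + r + 1, w))
        window = crack[np.ix_(ys, xs)] == 0   # background pixels inside the square
        if window.any():                      # first layer that reaches the background
            yy, xx = np.nonzero(window)
            yy, xx = ys[yy], xs[xx]
            i = np.argmin((yy - cy) ** 2 + (xx - cx) ** 2)
            return int(yy[i]), int(xx[i])
    return None

# start from the pixel with the largest medial-axis distance
cy, cx = np.unravel_index(np.argmax(distance), distance.shape)
p1 = nearest_background_point(crack, cy, cx)
p2 = (2 * cy - p1[0], 2 * cx - p1[1])         # mirror p1 through the starting pixel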
Upvotes: 0
Reputation: 3647
First thing I did was keep your images greyscale; there is no need to convert to 3 channels to find contours. Second was to convert the boundary image to a binary so that it is the same type as the skeleton image. Then I simply added the two to get the both image.
I then clocked through each row of the combined both image (as you are looking for perpendicular distances) and looked for elements that were True, i.e. that are either boundary or skeleton pixels. I made a simplifying assumption at this point: I only searched for cases where there is a boundary pixel, followed by a single skeleton pixel, followed by a second boundary pixel. I appreciate that this may not always be the case, but I leave that particular headache for you to sort out.
After that it's just a case of keeping track of the max & min distances recorded as you go through the image row by row. (Edit: there may be a cleaner way to do this than the way I've done it, but hopefully you get the idea.)
import numpy as np
import matplotlib.pyplot as plt
import cv2
from skimage import filters
from skimage import morphology
def get_skeleton(image_path):
    im = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    binary = im > filters.threshold_otsu(im)
    skeleton = morphology.skeletonize(binary)
    return skeleton

def get_boundary(image_path):
    reading_Img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    canny_Img = cv2.Canny(reading_Img, 90, 100)
    contours, _ = cv2.findContours(canny_Img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    canvas = np.zeros_like(reading_Img)
    boundary = cv2.drawContours(canvas, contours, -1, (255, 0, 0), 1)
    binary = boundary > filters.threshold_otsu(boundary)
    return binary

skeleton = get_skeleton("LtqlM.png")
boundary = get_boundary("LtqlM.png")
both = skeleton + boundary   # combined boundary + skeleton image

max_dist = 0
min_dist = 100000
for idx in range(both.shape[0]):  # counting through rows
    row = both[idx, :]
    lines = np.where(row == True)[0]
    if len(lines) == 3:  # boundary, skeleton, boundary
        dist_1 = lines[1] - lines[0]  # left boundary to skeleton
        dist_2 = lines[2] - lines[1]  # skeleton to right boundary
        if (dist_1 > dist_2) and dist_1 > max_dist:
            max_dist = dist_1
        if (dist_2 > dist_1) and dist_2 > max_dist:
            max_dist = dist_2
        if (dist_1 < dist_2) and dist_1 < min_dist:
            min_dist = dist_1
        if (dist_2 < dist_1) and dist_2 < min_dist:
            min_dist = dist_2

print("Maximum distance = ", max_dist)
print("Minimum distance = ", min_dist)
plt.imshow(both)
Upvotes: 1