Reputation: 61289
I have the following skeleton:
From this image I'd like to eliminate lines that are not part of loops.
I imagine this as a process in which ends of lines are found (marked with red dots) and the lines are gobbled up until there's a point where they branch (marked with blue dots).
I haven't found an operation for this in OpenCV or Scikit-Image.
Is there a name for such a transform? Is there a way to implement it in Python that would work efficiently?
I've also uploaded the image here in case the above image doesn't load correctly.
Upvotes: 1
Views: 666
Reputation: 61289
I haven't found a good way to do this in Python using existing libraries (though I hope someone can point me to one), nor a name for it.
So I've decided to call this the Fuse Transform, since the algorithm burns the lines away like fuses until they split.
I've implemented the Fuse Transform below as a Cython function for efficiency.
The algorithm requires a single O(N) sweep over the matrix to identify seed cells (the cells at the start of a fuse), followed by O(N) time in the total length of the fuses to eliminate the lines in question.
%%cython -a --cplus
import numpy as np
import cv2
import skimage.morphology as skm
import cython

from libcpp.queue cimport queue
cimport numpy as np

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
@cython.cdivision(True)
#Richard's Fuse Transform
#https://stackoverflow.com/a/51738867/752843
cpdef void FuseTransform(unsigned char [:, :] image):
    # set the variable extension types
    cdef int c, x, y, nx, ny, width, height, neighbours
    cdef queue[int] q

    # grab the image dimensions
    height = image.shape[0]
    width  = image.shape[1]

    cdef int dx[8]
    cdef int dy[8]

    #Offsets to neighbouring cells
    dx[:] = [-1,-1, 0, 1, 1, 1, 0,-1]
    dy[:] = [ 0,-1,-1,-1, 0, 1, 1, 1]

    #Find seed cells: those with only one neighbour
    for y in range(1, height-1):
        for x in range(1, width-1):
            if image[y,x]==0:            #Seed cells cannot be blank cells
                continue
            neighbours = 0
            for n in range(0,8):         #Look at all neighbours
                nx = x+dx[n]
                ny = y+dy[n]
                if image[ny,nx]>0:       #This neighbour has a value
                    neighbours += 1
            if neighbours==1:            #Was there only one neighbour?
                q.push(y*width+x)        #If so, this is a seed cell

    #Starting with the seed cells, gobble up the lines
    while not q.empty():
        c = q.front()
        q.pop()
        y = c//width                     #Convert flat index into 2D x-y index
        x = c%width
        image[y,x] = 0                   #Gobble up this part of the fuse
        neighbour  = -1                  #No neighbours yet
        for n in range(0,8):             #Look at all neighbours
            nx = x+dx[n]                 #Find coordinates of neighbour cells
            ny = y+dy[n]
            #If the neighbour would be off the side of the matrix, ignore it
            if nx<0 or ny<0 or nx==width or ny==height:
                continue
            if image[ny,nx]>0:           #Is the neighbouring cell active?
                if neighbour!=-1:        #If we've already found an active neighbour
                    neighbour=-1         #Then pretend we found no neighbours
                    break                #And stop looking. This is the end of the fuse.
                else:                    #Otherwise, make a note of the neighbour's index.
                    neighbour = ny*width+nx
        if neighbour!=-1:                #If there was only one neighbour
            q.push(neighbour)            #Continue burning the fuse
#Read in image
img = cv2.imread('part.jpg')

#Convert image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

#Apply Otsu's method to eliminate pixels of intermediate colour
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU)

#Skeletonize the thresholded image
skel = 255*np.uint8(skm.skeletonize(thresh > 0))

#Apply the Fuse Transform (in place, on a copy of the skeleton)
skh_dilated = skel.copy()
FuseTransform(skh_dilated)
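For readers without a Cython toolchain, the same fuse-burning logic can be sketched in plain Python/NumPy. This is my own slower, illustrative equivalent of the function above (names like `fuse_transform` are mine), not the compiled version:

```python
import numpy as np
from collections import deque

def fuse_transform(image):
    """Burn away open-ended lines (fuses) from their endpoints until
    they hit a branch point, leaving only loops. Modifies `image` in
    place; any non-zero pixel is treated as part of a line."""
    h, w = image.shape
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]

    def active_neighbours(y, x):
        return [(y + dy, x + dx) for dy, dx in offs
                if 0 <= y + dy < h and 0 <= x + dx < w
                and image[y + dy, x + dx] > 0]

    # Seed cells: line pixels with exactly one active neighbour (fuse tips)
    q = deque((y, x) for y in range(h) for x in range(w)
              if image[y, x] > 0 and len(active_neighbours(y, x)) == 1)

    # Burn each fuse one pixel at a time until it reaches a branch point
    while q:
        y, x = q.popleft()
        image[y, x] = 0
        nbrs = active_neighbours(y, x)
        if len(nbrs) == 1:   # still a simple fuse: keep burning
            q.append(nbrs[0])

# Tiny demo: a hollow 3x3 ring (a loop) with a tail attached on the right
img = np.zeros((7, 9), np.uint8)
img[1:4, 1:4] = 1   # ring
img[2, 2] = 0       # hollow centre, so the ring is a loop
img[2, 4:8] = 1     # tail that should burn away
fuse_transform(img)
# The tail is gone; the 8-pixel ring survives
```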
Upvotes: 3
Reputation: 10682
In the following algorithm, I first normalize the image so its pixels have values zero and one. I then examine the 8-connected neighbours of each non-zero pixel by applying a 3x3 unnormalized box filter. Multiplying (pixelwise) the filter output with the input image keeps only the non-zero pixels; each one's value now tells us how many 8-connected neighbours it has, plus 1, because the centre pixel counts itself as its own neighbour.
Red is the center pixel. Yellow are its 8-connected neighborhood.
We should eliminate pixels whose resulting value is less than 3, i.e. those with fewer than two 8-connected neighbours.
The code will make things clearer. It may not be very efficient. I didn't try to dig into Richard's code; maybe he's doing a similar thing more efficiently.
import cv2
import numpy as np

im = cv2.imread('USqDW.png', 0)

# set max pixel value to 1
s = np.uint8(im > 0)

count = 0
i = 0
while count != np.sum(s):
    # non-zero pixel count
    count = np.sum(s)
    # examine 3x3 neighborhood of each pixel
    filt = cv2.boxFilter(s, -1, (3, 3), normalize=False)
    # if the center pixel of the 3x3 neighborhood is zero, we are not interested in it
    s = s*filt
    # now we have pixels where the center pixel of the 3x3 neighborhood is non-zero
    # if a pixel's 8-connectivity is less than 2 we can remove it
    # threshold is 3 here because the box filter also counted the center pixel
    s[s < 3] = 0
    # set max pixel value to 1
    s[s > 0] = 1
    i = i + 1
After pruning:
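The thresholding rule can be checked on a tiny synthetic skeleton. In this sketch of mine, a NumPy 3x3 box sum stands in for `cv2.boxFilter(..., normalize=False)` so the demo needs only NumPy; the pruning loop itself is the same as above:

```python
import numpy as np

def box3x3(a):
    """Unnormalized 3x3 box sum (the role cv2.boxFilter plays above),
    with zero-padded borders."""
    p = np.pad(a.astype(int), 1)
    return sum(p[1 + dy:1 + dy + a.shape[0], 1 + dx:1 + dx + a.shape[1]]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

# Hollow 3x3 ring (a loop) with a two-pixel spur on the right
s = np.zeros((5, 8), np.uint8)
s[1:4, 1:4] = 1
s[2, 2] = 0          # hollow the centre so the ring is a loop
s[2, 4:6] = 1        # spur

count = 0
while count != np.sum(s):
    count = np.sum(s)
    filt = box3x3(s)
    s = s * filt      # keep only pixels that were already non-zero
    s[s < 3] = 0      # value = neighbours + 1, so <3 means <2 neighbours
    s[s > 0] = 1
```

Each pass eats one pixel off every spur tip. Note that the spur pixel directly touching the ring survives, since it has more than one neighbour; this iterative tip-removal prunes hanging lines back to their junctions rather than deleting them wholesale.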
Upvotes: 2