Tim L

Reputation: 31

Hit or miss morphology in python to find structures in images does not give the required results

I am using the morphology package to create a skeletonized version of our image. We are able to detect the termini in the skeletonized version, but we would also like to detect the points where the structure branches. We started by trying to detect them with 3 different matrices, named h1, h2 and h3. For now the miss matrix is not filled in for filtering; that is something we will try to add later. We are using 5x5 matrices for ease of editing in case we want to try filtering later.

The problem is that even though the patterns in matrices h1 and h2 are present in the skeletonized version, they are not detected. Matrix h3 does work. I can't figure out why this happens.

def branches(img_pruned, cropped_image):
    img = img_pruned
    nbranches = 0
    branches = cropped_image

    h1 = [
        [0,0,0,0,0],
        [0,0,0,1,0],
        [0,1,1,0,0],
        [0,0,1,0,0],
        [0,0,0,0,0]]
    h2 = [
        [0,0,0,0,0],
        [0,1,0,1,0],
        [0,0,1,0,0],
        [0,0,1,0,0],
        [0,0,0,0,0]]
    h3 = [
        [0,0,0,0,0],
        [0,1,1,1,0],
        [0,0,1,0,0],
        [0,0,0,0,0],
        [0,0,0,0,0]]
    m1 = [[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0]]

    hitlist = []
    misslist = []

    for i in range(4):
        hitlist.append(np.rot90(h1, i))
        hitlist.append(np.rot90(h2, i))
        hitlist.append(np.rot90(h3, i))
    for t in range(12):
        misslist.append(m1)

    for hit, miss in zip(hitlist, misslist):
        branchimg = m.binary_hit_or_miss(img, hit, miss)

    for y in range(len(branchimg)):
        for x in range(len(branchimg[y])):
            if branchimg[y][x]:
                nbranches += 1
                branches[y][x] = (0,0,255)

    print(nbranches)
    return branches

The original image we took.

The skeletonized photo; we also used pruning to make the termini (the end points of the branches) smoother.

Upvotes: 3

Views: 2964

Answers (1)

DrSandwich

Reputation: 159

I'm not totally sure about the hit-or-miss - it may have to do with the fact that your h1, h2, and h3 arrays are padded with 0s? Anyway I will answer with how I approach finding branching points, which does not use hit-or-miss.

The basic idea is that any pixel that represents a branch point should have 3 or more neighbors. In contrast, any pixel that is simply along a path should have exactly 2 neighbors.

If you were to consider only orthogonal neighbors as real neighbors, finding all pixels with 3 or more neighbors would be quite simple: just convolve the original image with a kernel K1 that looks like a plus sign:

0 1 0
1 1 1
0 1 0

and look for any pixels with a value of 4 or greater (3 neighbors + the center pixel). The code would look like this:

import numpy as np
import scipy.ndimage as snd
K1 = snd.generate_binary_structure(2,1)
A = your_binary_image
B = snd.convolve(1*A,K1, mode='constant')
image_of_branch_points = B>=4
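From there, a count and the coordinates of the flagged pixels (roughly what your nbranches loop does) would just be:

nbranches = int(image_of_branch_points.sum())        # number of flagged pixels
branch_coords = np.argwhere(image_of_branch_points)  # (row, col) of each flagged pixel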

(We use mode='constant' in the convolution to avoid the default mirror-like boundary condition giving us false positives at the edge of the dataset. That doesn't seem to be a concern in your dataset, since the structure doesn't go right up to the edge, but be aware of it. Also note the 1*A in the convolution: if you leave A as a boolean datatype, the result will never be greater than 1, so we must convert to int if it isn't already.)
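To make that boolean caveat concrete, here's a tiny toy example (not your data) showing the difference:

import numpy as np
import scipy.ndimage as snd

toy = np.zeros((5, 5), dtype=bool)
toy[2, :] = True   # a horizontal line
toy[:, 2] = True   # a vertical line, so the center pixel has 4 neighbors
K1 = snd.generate_binary_structure(2, 1)

print(snd.convolve(toy, K1, mode='constant').max())    # True: boolean output never exceeds 1
print(snd.convolve(1*toy, K1, mode='constant').max())  # 5: center pixel plus its 4 neighbors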

But you seem to be allowing for diagonal connections / branchings, too. This makes it a little more complicated. My first thought is to just use the above code but with a kernel (let's call it K2) of all 1s (i.e. connectivity allowed along diagonals), like so:

1 1 1
1 1 1
1 1 1
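In code, that would be just a small variant of the snippet above, with the same A and snd (note that snd.generate_binary_structure(2, 2) gives exactly this all-ones kernel):

K2 = snd.generate_binary_structure(2, 2)    # 3x3 of all ones: diagonal neighbors count too
B = snd.convolve(1*A, K2, mode='constant')
candidate_branch_points = B >= 4            # center pixel + at least 3 neighbors of any kind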

However, here is an example of a segment that might look like a branch point based on the above method, but really wouldn't be:

0 0 0 1 0
0 0 0 1 0
0 0 1 1 0
0 1 0 0 0
0 1 0 0 0

The center pixel here would come out as 4 when convolved with K2, yet inspection shows that it is not a branch point. This becomes a bit tricky, then.
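You can verify that quickly with the same convolution machinery:

seg = np.array([[0,0,0,1,0],
                [0,0,0,1,0],
                [0,0,1,1,0],
                [0,1,0,0,0],
                [0,1,0,0,0]])
K2 = np.ones((3, 3), dtype=int)
print(snd.convolve(seg, K2, mode='constant')[2, 2])   # prints 4, yet (2, 2) is not a branch point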

One solution is using brute force: march along each branch to the nearest neighbor until two neighbors are found (not counting the pixel you just came from). Then keep walking along those branches for at least 2 more steps until you're sure you have diverged into 2 distinct branches. If so, mark the divergence point as a branch point. I won't go into the brute force solution here but it's a possibility.

Another solution is to use the above method with kernel K2, and manually check each branch point against the original image, discarding any that are clearly not branch points on visual inspection. I imagine the majority of the pixels flagged as branch points would be correct, and you would only need to discard a few. If you are only processing a few datasets, this might not be a bad idea. But if you are developing code that is meant to process lots of data in the future, this isn't a great solution.

A third solution is to use this convolution method individually with kernels for every possible permutation of a branching structure. You seem to have written a few already (h1, h2, and h3). If you already have every possible branching pattern written up, then this is probably the quickest to implement:

image_of_branch_points = np.zeros_like(A, dtype=int)
list_of_kernels = [h1, h2, h3, ......]
for h in list_of_kernels:
    B = snd.convolve(1*A, h, mode='constant')
    image_of_branch_points += B
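Depending on what you want, you might prefer to keep only exact matches of each kernel instead of summing raw convolution values; one way to do that (my own tweak, using scipy's correlate so the kernels aren't flipped) would be:

image_of_branch_points = np.zeros_like(A, dtype=bool)
for h in list_of_kernels:
    h = np.asarray(h)
    B = snd.correlate(1*A, h, mode='constant')   # correlate: the kernel is not flipped
    image_of_branch_points |= (B == h.sum())     # exact hit: every 1 in h lands on a 1 in A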

Finally, a fourth solution, and the one I favor most: massage your data so that the kernel K1 will work, i.e. so that all "true" neighbor pixels are orthogonally connected rather than diagonally connected. Here is a rough method that should do that. I wish I had your original image so that I could test it, though.

Basically, convolve the following two kernels, separately, with the image:

0 0 0    0 0 0
0 0 1    1 0 0
0 1 0    0 1 0

Anywhere the result at the center pixel is 2, you know you are sitting along a diagonal path - and we cover both possible slopes of diagonal path. By writing a 1 to the center pixel in the original image, we fix the diagonal path to have an orthogonal connection (so that, for example, a long diagonal line would become a staircase). Even if the center pixel was already 1, you can overwrite it with 1 again; it doesn't matter. So this code should do it:

import numpy as np
import scipy.ndimage as snd

A = your_binary_image_as_ndarray
A_orthog = A.copy() #this will be the copy we update to have orthogonal neighbors
#Two diagonal kernels
K_diag_upslope = np.array([[0,0,0],[0,0,1],[0,1,0]])
K_diag_downslope = np.array([[0,0,0],[1,0,0],[0,1,0]])
#Convolve each one, one at a time, and save new points to A_orthog
#Note: the variable B is always an intermediary so I overwrite it every time
B = snd.convolve(1*A,K_diag_upslope, mode='constant')
A_orthog[B==2] = 1
B = snd.convolve(1*A,K_diag_downslope, mode='constant')
A_orthog[B==2] = 1

#Now A_orthog should be ready for the method shown in my first bit of code up above

K1 = snd.generate_binary_structure(2,1)
B = snd.convolve(1*A_orthog,K1, mode='constant')
image_of_branch_points = B>=4

I can't guarantee it will work in every case, but it worked in simple test cases for me - for example, the procedure to obtain A_orthog applied to an identity matrix np.identity(5) yielded this:

1, 0, 0, 0, 0
1, 1, 0, 0, 0
0, 1, 1, 0, 0
0, 0, 1, 1, 0
0, 0, 0, 1, 1
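If you want to reproduce that check yourself, this self-contained snippet just repeats the A_orthog steps from the code above on np.identity(5):

import numpy as np
import scipy.ndimage as snd

A = np.identity(5, dtype=int)
A_orthog = A.copy()
for K in (np.array([[0,0,0],[0,0,1],[0,1,0]]),    # upslope diagonal
          np.array([[0,0,0],[1,0,0],[0,1,0]])):   # downslope diagonal
    B = snd.convolve(A, K, mode='constant')
    A_orthog[B == 2] = 1
print(A_orthog)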

You should of course keep the original A around - A_orthog is just an intermediary for finding the branch points, and image_of_branch_points should still correspond to the correct branch points on A.

Please do let me know if this works for you. Like I said I don't have the proper test data to stress-test this idea on the many possibilities that might lurk within your image, but I find the problem quite interesting. I've dealt with this before myself, but only in systems with all-orthogonal neighbors.

edit:

I grabbed your jpg image and thresholded it to get a binary image to test my code. It mostly works, though it does give some clusters of more than one branch point - not unreasonably, in places where there is a whole mess of branches. JPG isn't the best format for binary data, so I'm also not 100% sure whether there were any JPG artifacts left after I thresholded. Anyway, here's the result I got, plotted like so:

import matplotlib.pyplot as plt

Abranch = 120*A + 240*image_of_branch_points

cmap = plt.cm.OrRd
cmap.set_under(color='black')
plt.imshow(Abranch, interpolation='none', cmap=cmap, vmin=0.01)

result of main code
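If those clusters are a nuisance, one option (my own suggestion, not something I tested on your image) is to label each connected cluster of flagged pixels and keep a single representative point per cluster:

labels, n = snd.label(image_of_branch_points, structure=np.ones((3, 3)))
points = snd.center_of_mass(image_of_branch_points, labels, range(1, n + 1))
print(n, "branch points, at (row, col):", points)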

Upvotes: 2
