Reputation: 109
I'm trying to calculate the distance between two pixels, but in a specific way. I need to know the thickness of the red line in the image, so my idea is to go through the image column by column, find the coordinates of the two edge points, and calculate the distance between them. I want to do this for both lines, top and bottom, for each column, and then calculate the average. I also need to convert from pixels to real scale.
This is my code for now:
# Make numpy array from image
npimage = np.array(image)
# Describe what a single red pixel looks like
red = np.array([255, 0, 0], dtype=np.uint8)
firs_point = 0
first_find = False
for i in range(image.width):
    column = npimage[:,i]
    for row in column:
        comparison = row == red
        equal_arrays = comparison.all()
        if equal_arrays == True and first_find == False:
            first_x_coord = i
            first_find = True
I can't get the coordinates. Can someone help me please? Of course, if there are more optimal ways to calculate it, I will be happy to accept proposals. I am very new! Thank you very much!
Upvotes: 2
Views: 3288
Reputation: 18895
After properly masking all red pixels, you can calculate the cumulative sum for each column of that mask:
Below each red line, you have a large area with a constant value: Below the first red line, it's the thickness of that red line. Below the second red line, it's the cumulative thickness of both red lines, and so on (if there would be even more red lines).
So, now, for each column, calculate the histogram from the cumulative sum, and pick out those peaks, leaving out bin 0 in the histogram (that'd be the large black area at the top). Per column, you get the above-mentioned (cumulative) thickness values for all red lines. What remains is to extract the actual, single thickness values and calculate the mean over all of those.
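To make that concrete, here's a small toy illustration of the idea on a single made-up column (the values are invented purely for demonstration, they're not from the actual image):
import numpy as np

# Toy column: 0 = background, 1 = red; two "lines" of thickness 3 and 2
col = np.array([0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0])

# Cumulative sum: plateaus at 3 (below the first line) and at 5 (below both lines)
cs = np.cumsum(col)
print(cs)            # [0 0 1 2 3 3 3 3 4 5 5 5]

# Histogram of the plateau values; bin 0 is the background area at the top
hist, _ = np.histogram(cs, bins=np.arange(len(col) + 1))
print(hist[:6])      # [2 1 1 4 1 3] -> apart from bin 0, bins 3 and 5 have the highest counts

# Cumulative thicknesses 3 and 5 -> actual thicknesses 3 and 5 - 3 = 2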
Here's my code:
import cv2
import numpy as np
# Read image
img = cv2.imread('Dc4zq.png')
# Mask pure red (note: OpenCV loads images in BGR order)
mask = (img == [0, 0, 255]).all(axis=2)
# We check for two lines
n = 2
# Cumulative sum for each column
cs = np.cumsum(mask, axis=0)
# Thickness values for each column
tvs = np.zeros((n, img.shape[1]))
for c in range(img.shape[1]):
    # Calculate histogram of cumulative sum for a column
    hist = np.histogram(cs[:, c], bins=np.arange(img.shape[1]+1))
    # Get n highest histogram values
    # These are the single thickness values for a column
    tv = np.sort(np.argsort(hist[0][1:])[::-1][0:n]+1)
    tv[1:] -= tv[:-1]
    tvs[:, c] = tv
# Get mean thickness value
mtv = np.mean(tvs.flatten())
print('Mean thickness value:', mtv)
The final result is:
Mean thickness value: 18.92982456140351
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
NumPy: 1.20.1
OpenCV: 4.5.1
----------------------------------------
EDIT: I'll provide some more details on the "NumPy magic" involved.
# Calculate the histogram of the cumulative sum for a single column
hist = np.histogram(cs[:, c], bins=np.arange(img.shape[1] + 1))
Here, bins represents the intervals for the histogram, i.e. [0, 1], [1, 2], and so on. To also get the last interval [569, 570], you need to use img.shape[1] + 1 in the np.arange call, because the right limit is not included in np.arange.
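As a quick standalone reminder of that np.arange behaviour (a small snippet of my own, not part of the code above):
import numpy as np

# np.arange excludes the right limit, so to get bin edges up to 570
# you have to pass img.shape[1] + 1
print(np.arange(5))                # [0 1 2 3 4] -> 5 itself is not included
edges = np.arange(5 + 1)           # [0 1 2 3 4 5] -> 5 bins: [0, 1), [1, 2), ..., [4, 5]
counts, bin_edges = np.histogram([0, 1, 1, 4, 4, 4], bins=edges)
print(counts)                      # [1 2 0 0 3]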
# Get the actual histogram starting from bin 1
hist = hist[0][1:]
In general, np.histogram returns a tuple, where the first element is the actual histogram. We extract that, and only look at all bins larger than 0 (remember, the large black area).
Now, let's disassemble this code:
tv = np.sort(np.argsort(hist[0][1:])[::-1][0:n]+1)
This line can be rewritten as:
# Get the actual histogram starting from bin 1
hist = hist[0][1:]
# Get indices of sorted histogram; these are the actual bins
hist_idx = np.argsort(hist)
# Reverse the found indices, since we want those bins with the highest counts
hist_idx = hist_idx[::-1]
# From those indices, we only want the first n elements (assuming there are n red lines)
hist_idx = hist_idx[:n]
# Add 1, because we cut the 0 bin
hist_idx = hist_idx + 1
# As a preparation: Sort the (cumulative) thickness values
tv = np.sort(hist_idx)
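To see those steps on concrete numbers, here's a made-up histogram (again, values invented just for illustration):
import numpy as np

hist = np.array([3, 120, 5, 118, 2])   # counts for bins 1..5 (bin 0 already cut off)

hist_idx = np.argsort(hist)            # [4 0 2 3 1] -> indices sorted by count, ascending
hist_idx = hist_idx[::-1]              # [1 3 2 0 4] -> highest counts first
hist_idx = hist_idx[:2]                # [1 3]       -> keep the n = 2 biggest bins
hist_idx = hist_idx + 1                # [2 4]       -> shift back, since bin 0 was removed
tv = np.sort(hist_idx)                 # [2 4]       -> (cumulative) thicknesses, sorted
print(tv)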
By now, we have the (cumulative) thickness values for each column. To reconstruct the actual, single thickness values, we need the "inverse" of the cumulative sum. There's this nice Q&A on that topic.
# The "inverse" of the cumulative sum to reconstruct the actual thickness values
tv[1:] -= tv[:-1]
# Save thickness values in "global" array
tvs[:, c] = tv
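For completeness, the same reconstruction on a tiny example (using the cumulative thicknesses 3 and 5 from the toy column further above):
import numpy as np

tv = np.array([3, 5])     # cumulative thicknesses: first line, first + second line
tv[1:] -= tv[:-1]         # "inverse" of the cumulative sum
print(tv)                 # [3 2] -> actual thicknesses of the two lines

# np.diff gives the same differences, just without the leading element
print(np.diff(np.array([3, 5])))   # [2]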
Upvotes: 2
Reputation: 2525
Using OpenCV:
import cv2
import numpy as np

img = cv2.imread(image_path)
average_line_width = np.average(np.count_nonzero((img[:,:,:]==np.array([0,0,255])).all(2),axis=0))/2
print(average_line_width)
Using PIL:
import numpy as np
from PIL import Image

img = np.asarray(Image.open(image_path))
average_line_width = np.average(np.count_nonzero((img[:,:,:]==np.array([255,0,0])).all(2),axis=0))/2
print(average_line_width)
output in both cases:
18.430701754385964
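To make the one-liner a bit easier to follow, here is the OpenCV variant broken into separate steps (same logic; the division by 2 assumes there are exactly two red lines in the image):
import cv2
import numpy as np

img = cv2.imread(image_path)

# Boolean mask: True where a pixel is exactly pure red (OpenCV uses BGR order)
red_mask = (img == np.array([0, 0, 255])).all(axis=2)

# Red pixels per column = combined thickness of both lines in that column
red_per_column = np.count_nonzero(red_mask, axis=0)

# Each column crosses two red lines, so halve before averaging over the columns
average_line_width = np.average(red_per_column / 2)
print(average_line_width)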
Upvotes: 1
Reputation: 207345
One way of doing this is to calculate the medial axis (centreline) of the red pixels. And then, as that line is 1px wide, the number of centreline pixels gives the length of the red lines. If you also calculate the number of red pixels, you can easily determine the average line thickness using:
average thickness = number of red pixels / length of red lines
The code looks like this:
#!/usr/bin/env python3
import cv2
import numpy as np
from skimage.morphology import medial_axis
# Load image
im=cv2.imread("Dc4zq.png")
# Make mask of all red pixels and count them
mask = np.alltrue(im==[0,0,255], axis=2)
nRed = np.count_nonzero(mask)
# Get medial axis of red lines and line length
skeleton = (medial_axis(mask*255)).astype(np.uint8)
lenRed = np.count_nonzero(skeleton)
cv2.imwrite('DEBUG-skeleton.png',(skeleton*255).astype(np.uint8))
# We now know the length of the red lines and the total number of red pixels
aveThickness = nRed/lenRed
print(f'Average thickness: {aveThickness}, red line length={lenRed}, num red pixels={nRed}')
That gives the skeleton as follows:
Sample Output
Average thickness: 16.662172878667725, red line length=1261, num red pixels=21011
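As a quick sanity check of the formula above: 21011 red pixels divided by a centreline length of 1261 pixels gives 21011 / 1261 ≈ 16.66, which matches the printed average thickness.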
Upvotes: 0
Reputation: 939
I'm not sure I got it, but I used joostblack's answer to calculate the average thickness in pixels of both lines. Here is my code with comments:
import cv2
import numpy as np

## Read the image
img = cv2.imread('img.png')
## Create a mask on the red part (I don't use hsv here)
lower_val = np.array([0,0,0])
upper_val = np.array([150,150,255])
mask = cv2.inRange(img, lower_val, upper_val)
## Apply the mask on the image
only_red = cv2.bitwise_and(img,img, mask= mask)
gray = cv2.cvtColor(only_red, cv2.COLOR_BGR2GRAY)
## Find Canny edges
edged = cv2.Canny(gray, 30, 200)
## Find contours
img, contours, hier = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
## Select contours using a bounding box
coords=[]
for c in contours:
    x,y,w,h = cv2.boundingRect(c)
    if w>10:
        ## Get coordinates of the bounding box if its width is sufficient (to avoid noise, because you have a huge red line on the left of your image)
        coords.append([x,y,w,h])
## Use the previous coordinates to cut the image and compute the average thickness for one red line using the answer proposed by joostblack
for x,y,w,h in coords:
    average_line_width = np.average(np.count_nonzero(only_red[y:y+h,x:x+w],axis=0))
    print(average_line_width)
    ## Show the selected result
    cv2.imshow('image',only_red[y:y+h,x:x+w])
    cv2.waitKey(0)
The first line averages 6.34 pixels, while the second is 5.94 pixels (along the y axis). If you want something more precise, you'll need to change this formula!
Upvotes: 0