Reputation: 1038
I have processed an image using openCV to obtain the image pattern. The image pattern is represented by 2 Python lists of horizontal and vertical lines respectively. The lines represent the borders of the patterns.
fx = horizontal lines
fy = vertical lines
Each list is arranged in order based on the distance from the top left corner of the image. Next, I use the following to calculate the intersection points of those discovered lines:
def get_corners(fx, fy):
    corners = []
    for x_line in fx:
        for y_line in fy:
            corner = get_intersection(x_line, y_line)
            if corner is not None:
                corners.append(corner)
    return corners
This should give me the corners (formatted as (x, y)) in order from left to right, top to bottom. Now I want to use those coordinates to crop rectangles out of the image.
The size of the corners list varies, and the patterns stack, meaning they have points in common. Given the list of points and the sizes of the lists of lines fx and fy:
How do I use the points to crop the rectangles?
Feel free to change get_corners() if you need to.
Here's an example: the pattern detection yields 4 possible rectangles in a 2x2 grid. This means the points list has a total of 9 values in it.
Points: [[],[],[],
         [],[],[],
         [],[],[]]
I am able to crop the first rectangle using something like this:
x1, y1 = points[0]  # top left corner of the first pattern
x2, y2 = points[4]  # bottom right corner of the first pattern
# rectangle
rectangle = img[y1:y2, x1:x2]
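For reference, the generalization I have in mind looks something like this (untested sketch; crop_all is a hypothetical helper, assuming points is a flat, row-major list with cols = len(fy) intersections per row):

```python
# Hypothetical sketch: crop every rectangle from a flat, row-major points list.
# `cols` is the number of intersection points per row (one per vertical line).
def crop_all(img, points, cols):
    rows = len(points) // cols
    crops = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            x1, y1 = points[i * cols + j]            # top-left corner
            x2, y2 = points[(i + 1) * cols + j + 1]  # bottom-right corner
            crops.append(img[y1:y2, x1:x2])          # rows are y, columns are x
    return crops
```

For the 2x2 example above this would yield 4 crops, in left-to-right, top-to-bottom order.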
Upvotes: 3
Views: 2062
Reputation: 2086
Your question is a bit unclear, but here's a solution that assumes your img variable is a NumPy array (since the question is tagged with opencv) with the intersection points in corners. Note that I modified the get_corners() function to build the corners in rows instead of a single flat list for easier processing, since you indicated this was OK.
import numpy as np

def get_corners(fx, fy):
    corners = []
    for x_line in fx:
        row = []  # NOTE: we're building rows here!
        for y_line in fy:
            corner = get_intersection(x_line, y_line)
            if corner is not None:
                row.append(corner)
        corners.append(row)
    return corners
def get_crops(img, corners):
    crops = []
    for i, row in enumerate(corners[0:-1]):
        for j in range(len(row) - 1):
            x1, y1 = row[j]
            next_row = corners[i + 1]
            x2, y2 = next_row[j + 1]
            # this slicing works with my test_img,
            # but you may need to adjust it for yours
            crops.append(img[x1:x2 + 1, y1:y2 + 1])
    return crops
test_corners = [
    [[0, 0], [0, 1], [0, 2]],
    [[1, 0], [1, 1], [1, 2]],
    [[2, 0], [2, 1], [2, 2]],
]
test_img = np.array(test_corners)  # test img to easily see indices

crops = get_crops(test_img, test_corners)
for i, crop in enumerate(crops):
    print("crop [{}]: {}\n".format(i, crop))
Here's the output of the test run. Of course, a real image would have other data, but this shows how to do the slicing for my test_img
numpy array.
crop [0]: [[[0 0] [0 1]]
[[1 0] [1 1]]]
crop [1]: [[[0 1] [0 2]]
[[1 1] [1 2]]]
crop [2]: [[[1 0] [1 1]]
[[2 0] [2 1]]]
crop [3]: [[[1 1] [1 2]]
[[2 1] [2 2]]]
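One caveat for real photos: NumPy indexes an image as img[row, col], i.e. img[y, x], while the question stores corners as (x, y) pairs. My test_img is symmetric in its indices, so the difference doesn't show up there; if your crops come out transposed, swap the slice order. A minimal sketch of the swapped slicing (the array and corner values here are made up):

```python
import numpy as np

# A fake 6x8 "image" (rows = y, cols = x) to demonstrate (x, y) -> [y, x] slicing.
img = np.arange(48).reshape(6, 8)

top_left = (1, 2)      # (x, y)
bottom_right = (5, 4)  # (x, y)

x1, y1 = top_left
x2, y2 = bottom_right
crop = img[y1:y2 + 1, x1:x2 + 1]  # note: y (rows) first, then x (cols)
```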
Upvotes: 1
Reputation: 2337
Stabbing in the dark here, but is this your intermediate image? I'm also assuming you're differentiating between squares and rectangles, that is, you don't want squares, only rectangles.
If this is the case, I'd use the following steps:
import numpy as np

cnt_rectangles = 0
rectangle_list = []
for index in np.arange(len(points) - 1):
    p = points[index]
    q = points[index + 1]
    if p[0] == q[0] or p[1] == q[1]:
        # the rectangle vertices must not share a row or a column: reject
        continue
    elif abs(p[0] - q[0]) == abs(p[1] - q[1]):
        # this is a square: reject
        continue
    else:
        # this is a rectangle
        cnt_rectangles += 1
        rectangle_list.append((p, q))
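Once rectangle_list is populated, cropping is just slicing between the paired corners. A sketch (crop_rectangles is a hypothetical helper; it assumes each pair holds (row, col) tuples, so swap the indices if yours are (x, y)):

```python
import numpy as np

def crop_rectangles(img, rectangle_list):
    # Each entry is a (p, q) pair of opposite corners, as (row, col) tuples.
    # sorted() makes the slice valid regardless of which corner comes first.
    crops = []
    for p, q in rectangle_list:
        r1, r2 = sorted((p[0], q[0]))
        c1, c2 = sorted((p[1], q[1]))
        crops.append(img[r1:r2, c1:c2])
    return crops
```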
Upvotes: 5