Reputation: 1734
In my project, I take four points in a larger image of a territory I want to crop, transform that region with getPerspectiveTransform
and warpPerspective
, and obtain a new, rectified (rectangular) image. In that new image I find specific (x, y) points and want to map them back to the original image, accounting for its perspective. For that purpose I used the same warpPerspective
call with the same arguments, but with the flag WARP_INVERSE_MAP
. However, it returns a large array of arrays of zeros.
How can I apply the inverse transformation not to an image, but to a point (in Python)?
This is the code I use:
import cv2
import numpy as np

p = (123, 234)
p_array = np.array([[p[0], p[1]]], dtype=np.float32)
matrix = cv2.getPerspectiveTransform(points, output_points)
# Note: flags is the fifth parameter of warpPerspective (after dst),
# so it must be passed by keyword here.
transformed_points = cv2.warpPerspective(p_array, matrix, table_image_size, flags=cv2.WARP_INVERSE_MAP)
Where:
points - the 4 points to be cropped from the original image,
output_points - the 4 points where the cropped region is placed in the new image,
table_image_size - the horizontal and vertical size of the new image
These arguments are identical in the forward and inverse transformations; the only difference is that instead of the image to crop from, I pass p_array to the warpPerspective
method.
Upvotes: 3
Views: 8083
Reputation: 670
From the warpPerspective documentation:
dsize: size of the output image.
Use transformed_points = cv2.warpPerspective(p_array, matrix, (2, 1), flags=cv2.WARP_INVERSE_MAP)
Upvotes: 2