user11719635

Reputation:

how do I determine the locations of the points after perspective transform, in the new image plane?

I'm using OpenCV+Python+Numpy and I have three points in the image, I know the exact locations of those points.

I am going to transform the image to another view, (for example I am transforming the perspective view to side view). If I do so I will not have the exact location of those three points in the image plane. I should write the code in a way that I can get new coordinates of those points.

    pts1 = np.float32([[867,652],[1020,580],[1206,666],[1057,757]])
    pts2 = np.float32([[700,732],[869,754],[906,916],[712,906]])

    matrix = cv2.getPerspectiveTransform(pts1, pts2)

    result = cv2.warpPerspective(Image1, matrix, (1920,1080))

    cv2.imshow('Image', Image1)
    cv2.imshow('Tran', result)

My question is: How can I determine the new locations of those 3 points?

Upvotes: 8

Views: 8373

Answers (2)

Leonardo Mariga

Reputation: 1162

Easy: you can look in the documentation at how warpPerspective works. To transform the location of a single point, use the following transformation:

    x' = (M11*x + M12*y + M13) / (M31*x + M32*y + M33)
    y' = (M21*x + M22*y + M23) / (M31*x + M32*y + M33)

where [x, y] is the original point, [x', y'] is the transformed point, and M is your 3x3 perspective matrix.

Implementing this in python you can use the following code:

p = (50,100) # your original point
px = (matrix[0][0]*p[0] + matrix[0][1]*p[1] + matrix[0][2]) / ((matrix[2][0]*p[0] + matrix[2][1]*p[1] + matrix[2][2]))
py = (matrix[1][0]*p[0] + matrix[1][1]*p[1] + matrix[1][2]) / ((matrix[2][0]*p[0] + matrix[2][1]*p[1] + matrix[2][2]))
p_after = (int(px), int(py)) # after transformation
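Equivalently, the two formulas above are a matrix multiplication with a homogeneous coordinate followed by division by the resulting scale factor. A minimal NumPy sketch, using a hypothetical matrix M for illustration (in practice it comes from cv2.getPerspectiveTransform):

```python
import numpy as np

# Hypothetical 3x3 perspective matrix, for illustration only.
M = np.array([[1.2,  0.1,  30.0],
              [0.05, 0.9,  15.0],
              [1e-4, 2e-4, 1.0]])

p = (50, 100)  # original point

# Append a homogeneous coordinate, multiply, then divide by the
# scale factor w -- this is exactly the formula above.
x, y, w = M @ np.array([p[0], p[1], 1.0])
p_after = (x / w, y / w)
print(p_after)
```

This is also what cv2.warpPerspective does internally for every pixel.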

You can see the result in the code below. The red dot is your original point; the second figure shows where it lands after the perspective transform. The blue circle is the point calculated with the formula above.


You can have a look at my Jupyter Notebook here or here.

The code:

import numpy as np
import cv2
import matplotlib.pyplot as plt

# load the image and build the perspective matrix
image = cv2.imread('sample.png')
pts1=np.float32([[867,652],[1020,580],[1206,666],[1057,757]]) 
pts2=np.float32([[700,732],[869,754],[906,916],[712,906]])
matrix=cv2.getPerspectiveTransform(pts1,pts2)

# Draw the point
p = (50,100)
cv2.circle(image,p, 20, (255,0,0), -1)

# Put in perspective
result=cv2.warpPerspective(image,matrix,(1500,800))

# Show images
plt.imshow(image)
plt.title('Original')
plt.show()

plt.imshow(result)
plt.title('Distorted')
plt.show()

# Here you can transform your point
p = (50,100)
px = (matrix[0][0]*p[0] + matrix[0][1]*p[1] + matrix[0][2]) / ((matrix[2][0]*p[0] + matrix[2][1]*p[1] + matrix[2][2]))
py = (matrix[1][0]*p[0] + matrix[1][1]*p[1] + matrix[1][2]) / ((matrix[2][0]*p[0] + matrix[2][1]*p[1] + matrix[2][2]))
p_after = (int(px), int(py))

# Draw the new point
cv2.circle(result,p_after, 20, (0,0,255), 12)

# Show the result
plt.imshow(result)
plt.title('Predicted position of your point in blue')
plt.show()

Upvotes: 21

Dima Mironov

Reputation: 575

Have a look at the documentation, but in general:

cv2.perspectiveTransform(points, matrix)

For example:

# note you need to add a new axis to match the expected input shape
cv2.perspectiveTransform(pts1[np.newaxis, ...], matrix)
# returns array equal to pts2

Upvotes: 2
