badcoder

Reputation: 45

When Using estimateRigidTransform() in C++ with OpenCV, Rotation is Correct but Translation is Incorrect

I have two sets of points from an image that I am trying to transform between. To do this, I am using OpenCV's estimateRigidTransform() function. With the 2x3 matrix it produces (containing the rotation matrix and the translation vector), I am using OpenCV's warpAffine() function to transform the image, and I am then displaying the transformed image in a new window. My code is as follows:

cv::namedWindow("Transformed Blue A", CV_WINDOW_AUTOSIZE);
cv::Mat mask_image, homography_matrix;
bool fullAffine = false;

homography_matrix = estimateRigidTransform(black_A_points, blue_A_points, fullAffine);
warpAffine(image, mask_image, homography_matrix, image.size());
cv::imshow("Transformed Blue A", mask_image);

black_A_points and blue_A_points are vectors each containing four Point2f values (the coordinates that the transform is fitted between).
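
As a sanity check (a minimal sketch, assuming the variables above are in scope and &lt;iostream&gt; is included), the fitted matrix can be applied directly to the source points with cv::transform() to see where each corner lands:

// Map each black-'A' corner through the fitted 2x3 matrix; if the fit
// is good, the outputs should be close to blue_A_points.
std::vector<cv::Point2f> mapped;
cv::transform(black_A_points, mapped, homography_matrix);
for (const cv::Point2f& p : mapped)
    std::cout << p << std::endl;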

The results are as follows:

[Image to be Transformed]

[Transformed Image]

I am using the corners of the 'A's as the feature points to transform between (hence they have red lines and green dots drawn on them, as visual confirmation that I have found these points correctly). I manually shifted the image over in the window so I could see it better, and the rotation is visually correct. However, I was expecting to see the blue 'A' where the black 'A' is in the window showing the original image (i.e. translated to that position).

The matrix produced is: [-0.7138494567933336, 0.7193648090910907, 40.48675760211488; -0.7193648090910907, -0.7138494567933336, 849.4044159834291]

Therefore the rotation matrix is: [-0.7138494567933336, 0.7193648090910907; -0.7193648090910907, -0.7138494567933336]

and the translation vector is: [40.48675760211488; 849.4044159834291]
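
For reference, this is the forward mapping that warpAffine() applies by default: a source pixel (x, y) lands at

x' = -0.713849*x + 0.719365*y + 40.4868
y' = -0.719365*x - 0.713849*y + 849.4044

in the output image.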

Am I using the matrix correctly? Do I need to perform some mathematical operation on it before it can be used in the window's coordinate frame (i.e. is my current coordinate frame wrong)? Or am I using the OpenCV functions incorrectly?

I also tried the OpenCV functions findHomography() and getAffineTransform(), but both produced the same problem.
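
For reference, a minimal sketch of how those alternatives can be called (assuming the same point vectors; note that getAffineTransform() takes exactly three point pairs):

// findHomography() returns a 3x3 perspective matrix, so the image
// has to be warped with warpPerspective() instead of warpAffine().
cv::Mat H = cv::findHomography(black_A_points, blue_A_points);
cv::warpPerspective(image, mask_image, H, image.size());

// getAffineTransform() fits an exact affine to three correspondences.
std::vector<cv::Point2f> src3(black_A_points.begin(), black_A_points.begin() + 3);
std::vector<cv::Point2f> dst3(blue_A_points.begin(), blue_A_points.begin() + 3);
cv::Mat A = cv::getAffineTransform(src3, dst3);
cv::warpAffine(image, mask_image, A, image.size());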

Thank you very much for your time. I appreciate any help.

UPDATE:

Corners of Black A:

[(495, 515), (479, 497), (428, 646), (345, 565)]

Corners of Blue A:

[(57, 125), (57, 151), (200, 80), (200, 198)]
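
As a check, plugging the first black corner (495, 515) into the matrix above gives

x' = -0.713849*495 + 0.719365*515 + 40.4868 ≈ 57.6
y' = -0.719365*495 - 0.713849*515 + 849.4044 ≈ 125.7

which is close to the first blue corner (57, 125). So the fitted matrix does map the listed pairs onto each other; the problem only appears when the image itself is warped.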

Upvotes: 1

Views: 878

Answers (1)

api55

Reputation: 11420

After testing it with your numbers I found the problem :)

You are using the points as (y, x) and not as (x, y). I tried with your original numbers and reproduced the same results you did. Then I wrote a small Python script to test it, swapping the coordinates:

import numpy as np
import cv2

# load image and corner data
img = cv2.imread("test.png")
pointsBlack = np.array([(495, 515), (479, 497), (428, 646), (345, 565)])
pointsBlue = np.array([(57, 125), (57, 151), (200, 80), (200, 198)])

# swap the coordinates: the points are stored as (y, x), OpenCV expects (x, y)
a = np.array([(p[1], p[0]) for p in pointsBlack])
b = np.array([(p[1], p[0]) for p in pointsBlue])

res = cv2.estimateRigidTransform(a, b, True)
print(res)

# img.shape is (rows, cols, channels); warpAffine wants (width, height)
imgWarped = cv2.warpAffine(img, res, img.shape[1::-1])
cv2.imshow("warped", imgWarped)
cv2.imshow("s", img)
cv2.waitKey(0)

The result of this is:

[[-7.80571429e-01 -7.46857143e-01  8.96688571e+02]
 [ 6.53714286e-01 -7.35428571e-01  8.43742857e+01]]

and the image looks like:

[screenshot of the warped result]

In C++ the cv::Point2f constructor is cv::Point2f(x, y), but you are passing (y, x). I am not sure how you found these points, but it could be a confusion with cv::Mat::at<T>(row, col), which takes the row first and then the column, i.e. in Cartesian terms y and then x.
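
A minimal sketch of the two conventions side by side (hypothetical values, just for illustration):

#include <opencv2/core.hpp>
#include <iostream>

int main() {
    // A 480x640 image: 480 rows (y) by 640 columns (x)
    cv::Mat img = cv::Mat::zeros(480, 640, CV_8UC1);

    int row = 100, col = 250;       // the pixel at x = 250, y = 100

    img.at<uchar>(row, col) = 255;  // Mat::at takes (row, col), i.e. (y, x)

    cv::Point2f p(col, row);        // Point2f takes (x, y) -- note the swap
    std::cout << p << std::endl;    // prints [250, 100]
}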

Upvotes: 1
