Reputation: 322
I have a fish-eye camera mounted on the ceiling and I want to locate some points on the floor. I have placed the origin of my reference system (real world) directly below the camera, and I want to know the position of every object in centimeters. This picture shows the setup:
First, I performed the camera calibration and obtained the following result, with an RMS reprojection error of 1.11:
Undistorted image after calibration
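For reference, I produce the undistorted image roughly like this (a sketch; cameraMatrix and distCoeffs are the outputs of my calibration, and frame is just my name for the raw fish-eye image):

#include <opencv2/opencv.hpp>

// Remove the lens distortion using the calibrated intrinsics.
cv::Mat undistorted;
cv::undistort(frame, undistorted, cameraMatrix, distCoeffs);

(I used the standard pinhole-plus-distortion model; OpenCV also has a cv::fisheye module whose model may fit a fish-eye lens better.)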
As a result of the calibration I obtained the intrinsic parameters (camera matrix), so I used cv::solvePnP to get the rotation and translation vectors. To apply it, I marked some points in the undistorted image (in pixels) and measured them in the real world according to my reference system.
For example, the origin is at the center of the 1024x768 image, so pixel (512, 384) corresponds to (0, 0, 0) cm. The following code shows these correspondences:
// pixel coordinates measured in the undistorted image
std::vector<cv::Point2f> imagePointsPix;
// corresponding real-world coordinates in cm (Z = 0 on the floor)
std::vector<cv::Point3f> objectPointsCm;
imagePointsPix.push_back(cv::Point2f(512.,384.));
imagePointsPix.push_back(cv::Point2f(404.,512.));
imagePointsPix.push_back(cv::Point2f(666.,211.));
imagePointsPix.push_back(cv::Point2f(519.,66.));
objectPointsCm.push_back(cv::Point3f(0., 0., 0.));
objectPointsCm.push_back(cv::Point3f(-80.,-132.,0.));
objectPointsCm.push_back(cv::Point3f(120.,188.,0.));
objectPointsCm.push_back(cv::Point3f(-40.,268.,0.));
cv::Mat rvec(3,1,cv::DataType<double>::type);
cv::Mat tvec(3,1,cv::DataType<double>::type);
cv::Mat rotationMatrix(3,3,cv::DataType<double>::type);
// cameraMatrix and distCoeffs come from the calibration above
cv::solvePnP(objectPointsCm, imagePointsPix, cameraMatrix, distCoeffs, rvec, tvec, false, cv::SOLVEPNP_ITERATIVE);
cv::Rodrigues(rvec, rotationMatrix);
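One sanity check I can run at this point (a sketch; projectedPoints is just a temporary name here): reproject the 3D points with the recovered pose and compare them against the pixels I marked:

// Reproject the object points with the estimated pose; if the pose is
// good, these should land close to the marked pixel coordinates.
std::vector<cv::Point2f> projectedPoints;
cv::projectPoints(objectPointsCm, rvec, tvec, cameraMatrix, distCoeffs, projectedPoints);
for (size_t i = 0; i < projectedPoints.size(); ++i)
    std::cout << imagePointsPix[i] << " vs " << projectedPoints[i] << std::endl;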
Now I have the camera matrix, the rotation matrix and the translation vector, so I can compute the real-world coordinates of any point given its position in pixels. This is the code:
cv::Mat uvPoint = cv::Mat::ones(3,1,cv::DataType<double>::type); //u,v,1
uvPoint.at<double>(0,0) = 512.; //img point for which we want its real coordinates
uvPoint.at<double>(1,0) = 384.;
cv::Mat tempMat, tempMat2;
double s;
tempMat = rotationMatrix.inv() * cameraMatrix.inv() * uvPoint;
tempMat2 = rotationMatrix.inv() * tvec;
s = 0 + tempMat2.at<double>(2,0); // Zconst + (R^-1 * t)_z; Zconst = 0 for points on the floor (it was 285 before, for objects at that height)
s /= tempMat.at<double>(2,0);
std::cout << "P = " << rotationMatrix.inv() * (s * cameraMatrix.inv() * uvPoint - tvec) << std::endl;
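For reference, the math this code implements is the pinhole back-projection. The projection model is

s * [u, v, 1]^T = cameraMatrix * (R * P + t)

so solving for the world point gives

P = R^-1 * (s * cameraMatrix^-1 * [u, v, 1]^T - t)

and s is fixed by requiring the Z component of P to equal Zconst (0 for points on the floor):

s = (Zconst + (R^-1 * t)_z) / (R^-1 * cameraMatrix^-1 * [u, v, 1]^T)_z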
I get these results for the same points I used to obtain my parameters:
The rest of the points also show errors that are far too large... I have used more points, but the results do not improve. I don't know where I went wrong; could anyone help me?
Thanks in advance.
Upvotes: 3
Views: 2147
Reputation: 2086
It looks like you may be effectively undistorting your image twice from solvePnP's perspective. This is caused by passing in the distortion coefficients along with point correspondences that were already derived from an undistorted image.
Try passing the actual camera matrix from your calibration to solvePnP instead of an identity matrix, but still pass NULL for the distortion coefficients, to avoid the double-undistortion.
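In code, that would look something like this (a sketch; cv::noArray() stands in for the NULL distortion coefficients):

// Intrinsics from calibration, but no distortion model, because the
// point correspondences were measured in the already-undistorted image.
cv::solvePnP(objectPointsCm, imagePointsPix, cameraMatrix, cv::noArray(), rvec, tvec, false, cv::SOLVEPNP_ITERATIVE);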
Upvotes: 1
Reputation: 322
Finally, I found out that the error was caused by the distortion coefficients, i.e. by my calibration. I set cameraMatrix to the identity matrix (eye(3)) and distCoeffs to NULL, so that solvePnP assumed I had a perfect camera. With this approach the error I obtained was much lower. I will have to do a better calibration.
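For completeness, the call that gave me the lower error looks roughly like this (a sketch; identityK is just my name for the identity camera matrix):

// Identity intrinsics and no distortion: solvePnP treats the input
// as coming from a perfect camera.
cv::Mat identityK = cv::Mat::eye(3, 3, CV_64F);
cv::solvePnP(objectPointsCm, imagePointsPix, identityK, cv::noArray(), rvec, tvec, false, cv::SOLVEPNP_ITERATIVE);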
Upvotes: 0