Reputation: 10996
I have a small confusion regarding the use of the solvePnP function in OpenCV.
I have the matrix of the camera's intrinsic parameters, I have identified some keypoints in the image, and I am trying to estimate the extrinsic parameters for the calibration.
The documentation for solvePnP says:
cv2.solvePnP(objectPoints, imagePoints, cameraMatrix,
distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, flags]]]]) → retval, rvec, tvec
I am guessing my imagePoints parameter is the keypoints I have detected, and these are specified in pixels as (x1, y1), (x2, y2), (x3, y3).
I am totally confused about the objectPoints. So, the documentation says:
objectPoints – Array of object points in the object coordinate space,
3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points.
vector<Point3f> can be also passed here.
How can I generate these object points from my image points? What do they mean by the object coordinate space here?
Upvotes: 3
Views: 12738
Reputation: 22954
The cv2.solvePnP() method is generally used for pose estimation; in other words, it estimates the position and orientation of a 3D object relative to the camera from its projection in a 2D image. For this you need to tag some key-points on the 3D model of the object (objectPoints) and also detect those same key-points in the 2D image (imagePoints).
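As a rough illustration, here is a minimal sketch of the call. The point coordinates, the planar-marker layout, and the intrinsic values are all made-up example numbers, not taken from the question; the important part is that objectPoints are measured in the object's own coordinate frame (here a flat square with Z = 0), while imagePoints are the matching pixel locations.

import numpy as np
import cv2

# Hypothetical 3D key-points in the object's own coordinate frame
# (corners of a 10 cm square marker, in metres, lying on the Z = 0 plane).
object_points = np.array([
    [0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0],
    [0.1, 0.1, 0.0],
    [0.0, 0.1, 0.0],
], dtype=np.float32)

# The same key-points detected in the image, in pixel coordinates.
image_points = np.array([
    [320.0, 240.0],
    [420.0, 245.0],
    [415.0, 345.0],
    [318.0, 340.0],
], dtype=np.float32)

# Example intrinsic matrix and zero distortion (replace with your calibration).
camera_matrix = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
], dtype=np.float32)
dist_coeffs = np.zeros((4, 1), dtype=np.float32)

# Solve for the extrinsic parameters: rvec (Rodrigues rotation vector) and
# tvec describe the transform from the object frame to the camera frame.
success, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                   camera_matrix, dist_coeffs)
print(success, rvec.ravel(), tvec.ravel())

So the "object coordinate space" is whatever frame you choose to express the 3D model in; solvePnP then returns the rotation and translation that map that frame into the camera frame.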
You can consult this answer to see how this technique is applied to face pose estimation.
Upvotes: 4