AlfredH

Reputation: 1

Loss of scale from AruCo poses to estimate Camera extrinsics?

Simply put, I am trying to estimate camera poses from pictures of a table with an ArUco marker in the middle of it, using the OpenCV library. 20 pictures are taken at 360/20 = 18-degree increments around the table. As far as I understand, estimatePoseSingleMarkers gives me the pose of the marker relative to the camera. Therefore I invert the marker pose to get the camera pose: R_cam = R_marker^T and tvec_cam = -R_marker^T * tvec_marker, where ^T denotes the transpose. However, when I compare the estimated poses to the true poses (taken directly from the camera parameters in Blender), the estimated cameras appear to be positioned further apart and also sit lower in the z-direction relative to the marker. Attached is a plot showing this: red points are the true poses and green points are the estimated ones; the square at the bottom is simply the corners of the marker. What may be the cause of this? Perhaps a loss of scale information?
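The pose inversion described above can be sketched as follows. This is a minimal NumPy example, assuming R_marker has already been obtained from the rvec returned by cv2.aruco.estimatePoseSingleMarkers (e.g. via cv2.Rodrigues); the synthetic rotation and translation at the bottom are made-up test values:

```python
import numpy as np

def invert_pose(R_marker, t_marker):
    """Invert a marker-in-camera pose to get the camera-in-marker pose.

    R_marker: 3x3 rotation of the marker in the camera frame
              (e.g. cv2.Rodrigues(rvec)[0]).
    t_marker: 3-vector translation of the marker in the camera frame (tvec).
    """
    R_cam = R_marker.T
    t_cam = -R_marker.T @ t_marker
    return R_cam, t_cam

# Round-trip check with a synthetic pose: 90-degree rotation about z.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.1, 0.2, 1.5])

R_cam, t_cam = invert_pose(R, t)

# Composing the pose with its inverse must recover the identity transform.
assert np.allclose(R_cam @ R, np.eye(3))
assert np.allclose(R_cam @ t + t_cam, np.zeros(3))
```

Note that this inversion preserves scale exactly, so a scale mismatch would have to come from somewhere else (marker side length or intrinsics), not from this step.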

Upvotes: 0

Views: 301

Answers (1)

AlfredH

Reputation: 1

So the solution was pretty simple: it was a fundamental flaw in the intrinsics matrix. The images rendered from Blender did not come from a perfect pinhole camera; the camera I had used had different fx and fy (focal lengths in x and y). Simply setting these to the same value led to near-perfect estimations.
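For reference, a consistent intrinsics matrix for a square-pixel pinhole render can be built from the camera settings. This is a sketch under assumed values; the focal length, sensor width, and resolution below are hypothetical placeholders, and the standard conversion fx = focal_mm * width_px / sensor_mm applies when the pixel aspect ratio is 1:1 (Blender's default):

```python
import numpy as np

# Hypothetical render settings -- substitute your own Blender values.
focal_mm = 50.0                 # camera focal length (mm)
sensor_mm = 36.0                # sensor width (mm), Blender's default
width_px, height_px = 1920, 1080

# With square pixels, one focal length in pixels serves both axes,
# so fx == fy by construction and the fx != fy mismatch cannot occur.
f_px = focal_mm * width_px / sensor_mm

K = np.array([[f_px, 0.0,  width_px / 2.0],
              [0.0,  f_px, height_px / 2.0],
              [0.0,  0.0,  1.0]])
print(K)
```

Passing a K like this (matching the actual render settings) to the ArUco pose estimation keeps the estimated translations at the correct scale.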

Upvotes: 0
