nickponline

Reputation: 25914

Why do triangulated points not project back to same image points in OpenCV?

I have two corresponding image points (2D) observed by the same camera, with intrinsic matrix K, from two different poses (R1, t1) and (R2, t2). If I triangulate the corresponding image points to a 3D point and then reproject it back into the original cameras, it only closely matches the original image point in the first camera. Can someone help me understand why? Here is a minimal example showing the issue:

import cv2
import numpy as np

# Set up two cameras near each other

K = np.array([
    [718.856 ,   0.  ,   607.1928],
    [  0.  ,   718.856 , 185.2157],
    [  0.  ,     0.   ,    1.    ],
])

R1 = np.array([
    [1., 0., 0.],
    [0., 1., 0.],
    [0., 0., 1.]
])

R2 = np.array([
    [ 0.99999183 ,-0.00280829 ,-0.00290702],
    [ 0.0028008  , 0.99999276, -0.00257697],
    [ 0.00291424 , 0.00256881 , 0.99999245]
])

t1 = np.array([[0.], [0.], [0.]])

t2 = np.array([[-0.02182627], [ 0.00733316], [ 0.99973488]])

P1 = np.hstack([R1.T, -R1.T.dot(t1)])
P2 = np.hstack([R2.T, -R2.T.dot(t2)])

P1 = K.dot(P1)
P2 = K.dot(P2)

# Corresponding image points
imagePoint1 = np.array([371.91915894, 221.53485107])
imagePoint2 = np.array([368.26071167, 224.86262512])

# Triangulate
point3D = cv2.triangulatePoints(P1, P2, imagePoint1, imagePoint2).T
point3D = point3D[:, :3] / point3D[:, 3:4]
print(point3D)

# Reproject back into the two cameras
rvec1, _ = cv2.Rodrigues(R1)
rvec2, _ = cv2.Rodrigues(R2)

p1, _ = cv2.projectPoints(point3D, rvec1, t1, K, distCoeffs=None)
p2, _ = cv2.projectPoints(point3D, rvec2, t2, K, distCoeffs=None)

# Measure the difference between the original image points and the reprojected points

reprojection_error1 = np.linalg.norm(imagePoint1 - p1[0, :])
reprojection_error2 = np.linalg.norm(imagePoint2 - p2[0, :])

print(reprojection_error1, reprojection_error2)

The reprojection error in the first camera is always good (< 1px) but the second one is always large.
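One way to narrow this down (a diagnostic sketch, not in the original post; pure NumPy, no OpenCV needed) is to reproject the triangulated point directly through the same P2 matrix that was used for triangulation. If that residual is sub-pixel, the triangulation itself is consistent and the mismatch must come from the extrinsics being passed to `cv2.projectPoints`. The 3D point below is the value printed by the code above:

```python
import numpy as np

K = np.array([[718.856, 0., 607.1928],
              [0., 718.856, 185.2157],
              [0., 0., 1.]])
R2 = np.array([[0.99999183, -0.00280829, -0.00290702],
               [0.0028008,   0.99999276, -0.00257697],
               [0.00291424,  0.00256881,  0.99999245]])
t2 = np.array([[-0.02182627], [0.00733316], [0.99973488]])

# Same construction as in the code above
P2 = K @ np.hstack([R2.T, -R2.T @ t2])

# Triangulated point printed by the code above (homogeneous)
X = np.array([-12.19064, 1.8813655, 37.24711708, 1.0])

xh = P2 @ X
print(xh[:2] / xh[2])  # lands within a fraction of a pixel of imagePoint2
```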

Upvotes: 3

Views: 2129

Answers (2)

Merwanski

Reputation: 11

If someone else is looking for a concrete example of how to use the triangulation function of cv2, have a look at this post on GitHub:

# -----------------------------------------------------------------------------
# Test
# -----------------------------------------------------------------------------

print('Triangulate 3d points - units in meters')
# triangulatePoints requires 2xN arrays, so transpose the points
# (x1, x2 and the other inputs below are defined in the linked gist)
p = cv2.triangulatePoints(P1, P2, x1.T, x2.T)
# however, a homogeneous point is returned
p /= p[3]
print('Projected point from openCV:',  p.T)

p = triangulate_nviews([P1, P2], [x1h, x2h])
print('Projected point from 2 camera views:',  p)

p = triangulate_nviews([P1, P2, P3], [x1h, x2h, x3h])
print('Projected point from 3 camera views:',  p)

# cv2 two image points - not homogeneous on input
p = cv2.triangulatePoints(P1, P2, x1h2[:, :2].T, x2h2[:, :2].T)
p /= p[3]
print('Projected points from openCV:\n', p.T)

p = triangulate_points(P1, P2, x1h2, x2h2)
print('Projected point from code:\n',  p)

code in gist
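The helpers `triangulate_nviews` and `triangulate_points` above are defined in the linked gist, so the snippet is not runnable on its own. For reference, a minimal self-contained two-view linear (DLT) triangulation, which is essentially what those helpers and `cv2.triangulatePoints` implement, looks roughly like this (the function name is my own):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 are 3x4 projection matrices; x1, x2 are 2-vectors of pixel
    coordinates. Each observation contributes two rows to a homogeneous
    system A X = 0, solved in the least-squares sense via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # right singular vector of the smallest singular value
    return X[:3] / X[3]     # dehomogenize

# Quick self-check with a synthetic point and two simple cameras
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 10.0])
x1 = (P1 @ np.append(X_true, 1))[:2] / (P1 @ np.append(X_true, 1))[2]
x2 = (P2 @ np.append(X_true, 1))[:2] / (P2 @ np.append(X_true, 1))[2]
print(triangulate_dlt(P1, P2, x1, x2))  # ~ [1. 2. 10.]
```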

Upvotes: 0

rayryeng

Reputation: 104565

Remember that you constructed each projection matrix from the transpose of the rotation matrix combined with the negative of the translation vector. You must do the same thing when supplying the pose to cv2.projectPoints.

Therefore, take the transpose of the rotation matrix and pass it to cv2.Rodrigues, and supply the negative of the translation vector to cv2.projectPoints:

# Reproject back into the two cameras
rvec1, _ = cv2.Rodrigues(R1.T) # Change
rvec2, _ = cv2.Rodrigues(R2.T) # Change

p1, _ = cv2.projectPoints(point3D, rvec1, -t1, K, distCoeffs=None) # Change
p2, _ = cv2.projectPoints(point3D, rvec2, -t2, K, distCoeffs=None) # Change

Doing this we now get:

[[-12.19064      1.8813655   37.24711708]]
0.009565768222768252 0.08597237597736622

To be absolutely sure, here are the relevant variables:

In [32]: p1
Out[32]: array([[[371.91782052, 221.5253794 ]]])

In [33]: p2
Out[33]: array([[[368.3204979 , 224.92440583]]])

In [34]: imagePoint1
Out[34]: array([371.91915894, 221.53485107])

In [35]: imagePoint2
Out[35]: array([368.26071167, 224.86262512])

We can see that the first few significant digits match, and a small residual is expected because triangulation is a least-squares solve: the recovered 3D point generally does not reproject exactly onto either original image point.
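One caveat worth noting: the translation that corresponds exactly to P2 = K [R2ᵀ | -R2ᵀ t2] is -R2ᵀ t2 rather than -t2. The two are nearly equal here only because R2 is close to the identity, which is likely why camera 2's residual (~0.086 px) is slightly larger than camera 1's. A small NumPy check of the gap, using the matrices from the question:

```python
import numpy as np

R2 = np.array([[0.99999183, -0.00280829, -0.00290702],
               [0.0028008,   0.99999276, -0.00257697],
               [0.00291424,  0.00256881,  0.99999245]])
t2 = np.array([[-0.02182627], [0.00733316], [0.99973488]])

t_exact = -R2.T @ t2   # translation that matches P2 exactly
t_approx = -t2         # what the answer uses; close because R2 is near identity

print(t_exact.ravel())
print(t_approx.ravel())
print(np.linalg.norm(t_exact - t_approx))  # small but nonzero gap
```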

Upvotes: 3
