sanket

Reputation: 21

LiDAR to camera image fusion

I want to project LiDAR points {X, Y, Z, 1} onto a camera image {u, v}. I have the LiDAR points, the camera matrix (K), the distortion coefficients (D), the positions of the camera and the LiDAR (x, y, z), and their rotations as quaternions (w + xi + yj + zk). Three coordinate systems are involved: the vehicle axle coordinate system (X: forward, Y: left, Z: up), the LiDAR coordinate system (X: right, Y: forward, Z: up), and the camera coordinate system (X: right, Y: down, Z: forward). I tried the approach below, but the points do not fuse properly; all of them are plotted in the wrong place.

Given the rotation and position of the camera and of the LiDAR, I first compute the translation vectors:

t_lidar  = R_lidar  * Position_lidar^T
t_camera = R_camera * Position_camera^T

Then the relative rotation and translation are computed as follows:

R_relative = R_camera^T * R_lidar
t_relative = t_lidar -  t_camera

Then the final transformation matrix and the mapping from LiDAR points [X,Y,Z,1] to the image frame [u,v,1] are given by:

T = [ R_relative | t_relative ]
[u,v,1]^T = K * T * [X,Y,Z,1]^T
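For reference, the chain above can be written out in NumPy. All poses and intrinsics below are hypothetical placeholder values; note that in this sketch the relative translation is rotated into the camera frame (t_rel = R_cam^T * (p_lidar - p_cam)), and the homogeneous result is divided by its third component before the pixel coordinates are read off:

```python
import numpy as np

def quat_to_rot(w, x, y, z):
    """Rotation matrix from a quaternion w + xi + yj + zk (normalized first)."""
    n = np.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Hypothetical sensor poses in the vehicle frame (sensor -> vehicle: p_v = R * p_s + p)
R_lidar = quat_to_rot(0.7071, 0.0, 0.0, -0.7071)  # LiDAR axes: X right, Y forward
p_lidar = np.array([1.0, 0.0, 2.0])
R_cam   = quat_to_rot(0.5, -0.5, 0.5, -0.5)       # camera axes: X right, Y down, Z forward
p_cam   = np.array([1.5, 0.0, 1.5])

# LiDAR -> camera:  p_c = R_cam^T R_lidar p_l + R_cam^T (p_lidar - p_cam)
R_rel = R_cam.T @ R_lidar
t_rel = R_cam.T @ (p_lidar - p_cam)   # translation rotated into the camera frame

K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

pts_lidar = np.array([[0.0, 10.0, 0.0]])  # one point 10 m ahead of the LiDAR
pts_cam = pts_lidar @ R_rel.T + t_rel     # Nx3 points in the camera frame
uvw = pts_cam @ K.T                       # homogeneous image coordinates [su, sv, s]
uv = uvw[:, :2] / uvw[:, 2:3]             # perspective division is required
```

Points with a non-positive third camera coordinate (pts_cam[:, 2] <= 0) lie behind the image plane and should be discarded before plotting.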

Is there anything I am missing?

Upvotes: 0

Views: 1013

Answers (1)

Dr Yuan Shenghai

Reputation: 1915

Use OpenCV's projectPoints directly:

https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#projectpoints


C++: void projectPoints(InputArray objectPoints, InputArray rvec, InputArray tvec, InputArray cameraMatrix, InputArray distCoeffs, OutputArray imagePoints, OutputArray jacobian=noArray(), double aspectRatio=0 )

objectPoints – Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel (or vector&lt;Point3f&gt;), where N is the number of points in the view.

rvec – Rotation vector. See Rodrigues() for details.

tvec – Translation vector.

cameraMatrix – Camera matrix

Upvotes: 1
