Reputation: 11
I have two separate point clouds (type sensor_msgs/PointCloud2) from two different sensors: a 3D stereo camera and a 2D LiDAR. How can I fuse these two point clouds, given that the stereo point cloud is 3D with a fixed length, while the 2D LiDAR point cloud has a variable length?
If someone has worked on this, please help me; your help will be highly appreciated. Thanks.
Upvotes: 1
Views: 707
Reputation: 1915
I studied this in my research.
The first step is to calibrate the two sensors to obtain their extrinsic transform. There are a few open-source packages you can play with, which I have listed below.
The second step is to fuse the data. The simple way is to apply the calibration transform and publish it over tf (a minimal sketch is given at the end of this answer). The complicated way is to deploy pipelines such as depth-image-to-LiDAR alignment and depth-map variance estimation and fusion. You can choose the easy way, such as a landmark-based EKF estimation, or you can follow Ji Zhang's (CMU) visual-LiDAR-inertial fusion work for direct 3D-feature-to-LiDAR alignment. The choice is yours.
(1) http://wiki.ros.org/velo2cam_calibration
Guindel, C., Beltrán, J., Martín, D. and García, F. (2017). Automatic Extrinsic Calibration for Lidar-Stereo Vehicle Sensor Setups. IEEE International Conference on Intelligent Transportation Systems (ITSC), 674–679.
Pros: pretty accurate and easy-to-use package. Cons: you have to make a rigid cut-out calibration board.
(2) https://github.com/ankitdhall/lidar_camera_calibration
LiDAR-Camera Calibration using 3D-3D Point correspondences, arXiv 2017
Pros: easy to use, easy to make the hardware. Cons: may not be as accurate.
There were a couple of others I listed in my thesis; I'll go back, check for them, and update here if I remember.
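To make the simple way concrete, here is a minimal sketch in Python (ROS 1). It assumes the extrinsic calibration result is already available on tf, and the topic and frame names (stereo/points2, lidar/points, stereo_camera_frame, fused_cloud) are placeholders you would replace with your own. It transforms the LiDAR cloud into the camera frame with tf2 and concatenates the points; the variable LiDAR cloud length does not matter because the fused cloud is rebuilt on every callback.

```python
#!/usr/bin/env python
# Minimal fusion sketch: transform the 2D LiDAR cloud into the stereo camera
# frame (extrinsics from calibration, available on /tf) and concatenate the
# two clouds into one PointCloud2. Topic/frame names are placeholders.
import rospy
import tf2_ros
import message_filters
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud

CAMERA_FRAME = "stereo_camera_frame"  # placeholder frame id


class CloudFuser(object):
    def __init__(self):
        self.tf_buffer = tf2_ros.Buffer()
        self.tf_listener = tf2_ros.TransformListener(self.tf_buffer)
        self.pub = rospy.Publisher("fused_cloud", PointCloud2, queue_size=1)

        cam_sub = message_filters.Subscriber("stereo/points2", PointCloud2)
        lidar_sub = message_filters.Subscriber("lidar/points", PointCloud2)
        # Approximate sync because the two sensors run at different rates.
        sync = message_filters.ApproximateTimeSynchronizer(
            [cam_sub, lidar_sub], queue_size=10, slop=0.05)
        sync.registerCallback(self.callback)

    def callback(self, cam_cloud, lidar_cloud):
        try:
            # Extrinsic calibration result, looked up from tf.
            transform = self.tf_buffer.lookup_transform(
                CAMERA_FRAME, lidar_cloud.header.frame_id,
                lidar_cloud.header.stamp, rospy.Duration(0.1))
        except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                tf2_ros.ExtrapolationException) as e:
            rospy.logwarn("TF lookup failed: %s", e)
            return

        lidar_in_cam = do_transform_cloud(lidar_cloud, transform)

        # Concatenate the xyz points of both clouds; the output cloud is
        # rebuilt every callback, so the LiDAR cloud length may vary freely.
        points = list(pc2.read_points(cam_cloud,
                                      field_names=("x", "y", "z"),
                                      skip_nans=True))
        points += list(pc2.read_points(lidar_in_cam,
                                       field_names=("x", "y", "z"),
                                       skip_nans=True))
        fused = pc2.create_cloud_xyz32(cam_cloud.header, points)
        self.pub.publish(fused)


if __name__ == "__main__":
    rospy.init_node("cloud_fusion")
    CloudFuser()
    rospy.spin()
```

If your calibration tool only prints a fixed transform instead of publishing it, you can broadcast it yourself (for example with the tf2_ros static_transform_publisher tool) before running a node like the one above.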
Upvotes: 1