michezio

Reputation: 21

Generate a PointCloud object in Open3D from RealSense data

I'm trying to convert data captured from an Intel RealSense device into an Open3D PointCloud object that I then need to process. For the moment I only have the rosbag sample files to work with, but I think a similar procedure should apply to the live stream from the device. So far I have managed to read and display the files using pyrealsense2 and OpenCV.
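For context, the reading/display part looks roughly like this (a minimal sketch from memory; the exact stream configuration may differ from what I actually ran):

import numpy as np
import cv2
import pyrealsense2 as rs

# Read from the rosbag file instead of a live device
pipeline = rs.pipeline()
config = rs.config()
rs.config.enable_device_from_file(config, "structured.bag")
pipeline.start(config)

try:
    # Grab one pair of frames and show them with OpenCV
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()

    depth_image = np.asanyarray(depth_frame.get_data())
    color_image = np.asanyarray(color_frame.get_data())

    # Colorize the 16-bit depth image so it is visible
    depth_colormap = cv2.applyColorMap(
        cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

    cv2.imshow("color", color_image)
    cv2.imshow("depth", depth_colormap)
    cv2.waitKey(0)
finally:
    pipeline.stop()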

Now I'm trying to use Open3D. For example (using the Structured Light Short-Range Example):

import open3d as o3d

bag_reader = o3d.t.io.RSBagReader()
bag_reader.open("structured.bag")

rgbd_image = bag_reader.next_frame()
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
    rgbd_image,
    o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault))

This results in an error saying:

TypeError: create_from_rgbd_image(): incompatible function arguments.

So I tried to create an o3d.geometry.RGBDImage manually with the "correct" format:

import numpy as np

# Convert the images returned by RSBagReader to numpy arrays,
# then wrap them in legacy open3d.geometry.Image objects
raw_rgb = np.array(rgbd_image.color)
raw_depth = np.array(rgbd_image.depth)
new_rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
    o3d.geometry.Image(raw_rgb), o3d.geometry.Image(raw_depth))

Now there is no TypeError when creating the PointCloud, but the result is completely wrong. Displaying it with:

o3d.visualization.draw_geometries([pcd])

[Screenshots: front view and side view of the resulting point cloud]

When in reality it should look something like this (note that I'm only processing the first frame, while the rosbag file is actually a video):

[Screenshot: expected (correct) point cloud]

I also tried experimenting with the intrinsic and extrinsic matrices using the parameters extracted from the rosbag file, but the point cloud still looks completely wrong.
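For reference, this is roughly how I pulled the intrinsics out of the bag via pyrealsense2 (a sketch; I may have the exact stream/profile calls slightly off):

import open3d as o3d
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
rs.config.enable_device_from_file(config, "structured.bag")
profile = pipeline.start(config)

# Intrinsics of the depth stream recorded in the bag
depth_stream = profile.get_stream(rs.stream.depth).as_video_stream_profile()
intr = depth_stream.get_intrinsics()

pinhole = o3d.camera.PinholeCameraIntrinsic(
    intr.width, intr.height, intr.fx, intr.fy, intr.ppx, intr.ppy)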

What am I doing wrong? What's the way to correctly recreate a point cloud in Open3D from RealSense data?

Upvotes: 1

Views: 1709

Answers (1)

09th

Reputation: 1

For those who encounter the same problem: the issue is that functions from two different branches are mixed here, one from the legacy API and one from the tensor API. In this case o3d.t.io is used together with o3d.geometry. You need to use either o3d.io with o3d.geometry, or o3d.t.io with o3d.t.geometry. Another option is to convert between the two with to_legacy()/from_legacy() on the tensor data.
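A minimal sketch of the consistent tensor path (assuming the bag metadata exposes usable intrinsics; defaults such as the depth scale may need adjusting for your recording):

import open3d as o3d

bag_reader = o3d.t.io.RSBagReader()
bag_reader.open("structured.bag")

# Intrinsics recorded in the bag, as a 3x3 tensor for the tensor API
intrinsics = o3d.core.Tensor(bag_reader.metadata.intrinsics.intrinsic_matrix)

rgbd_image = bag_reader.next_frame()
pcd = o3d.t.geometry.PointCloud.create_from_rgbd_image(rgbd_image, intrinsics)

# Convert back to the legacy branch if the rest of the pipeline expects it
o3d.visualization.draw_geometries([pcd.to_legacy()])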

Upvotes: 0
