George Profenza

Reputation: 51837

How to efficiently map a single ColorSpacePoint to a CameraPoint using the Kinect SDK?

I'm trying to simply get a 3D CameraSpacePoint position for a given 2D ColorSpacePoint from the RGB stream.

Looking at the CoordinateMapper methods, the only ones for mapping from color coordinates are:

- MapColorFrameToDepthSpace: maps a frame from color space to depth space.
- MapColorFrameToDepthSpaceUsingIBuffer: maps a frame from color space to depth space.
- MapDepthFrameToCameraSpace: maps a frame from depth space to camera space.
- MapDepthFrameToCameraSpaceUsingIBuffer: maps a frame from depth space to camera space.

The issue is that on my older PC, when I use MapColorFrameToDepthSpace, the frame rate drops from ~33fps to ~10fps. I'm guessing it takes a while to convert 1920x1080 points from 2D to 3D, but I wish there were a faster way, since I only need to convert a single point for my application. Even the SDK samples (both C++ and C#) run at ~1fps in the color-to-camera conversion demos.

Even if I use MapDepthFrameToCameraSpace and then MapDepthPointToCameraSpace, that still converts the whole depth frame to camera space when I only need one point.
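For reference, this is roughly the kind of two-step workaround I mean, sketched in C++ against ICoordinateMapper (coordinateMapper and depthBuffer are assumed to already exist; the full-frame mapping call is still there, which is exactly the part I'd like to avoid):

```cpp
#include <Kinect.h>
#include <vector>
#include <limits>

// Sketch only: coordinateMapper is a valid ICoordinateMapper*, depthBuffer holds
// the latest 512x424 depth frame, (colorX, colorY) is the color pixel of interest.
CameraSpacePoint mapSingleColorPixel(ICoordinateMapper* coordinateMapper,
                                     const UINT16* depthBuffer,
                                     int colorX, int colorY)
{
    const int depthWidth = 512, depthHeight = 424;
    const int colorWidth = 1920, colorHeight = 1080;

    // 1) Full-frame mapping: one DepthSpacePoint per color pixel (the slow part).
    std::vector<DepthSpacePoint> depthSpacePoints(colorWidth * colorHeight);
    HRESULT hr = coordinateMapper->MapColorFrameToDepthSpace(
        depthWidth * depthHeight, depthBuffer,
        static_cast<UINT>(depthSpacePoints.size()), depthSpacePoints.data());

    // 2) Single-point lookup: find the depth pixel behind the color pixel,
    //    then map just that one point to camera space.
    CameraSpacePoint cameraPoint = { 0 };
    if (SUCCEEDED(hr))
    {
        DepthSpacePoint d = depthSpacePoints[colorY * colorWidth + colorX];
        if (d.X != -std::numeric_limits<float>::infinity())
        {
            int dx = static_cast<int>(d.X + 0.5f);
            int dy = static_cast<int>(d.Y + 0.5f);
            UINT16 depth = depthBuffer[dy * depthWidth + dx];
            coordinateMapper->MapDepthPointToCameraSpace(d, depth, &cameraPoint);
        }
    }
    return cameraPoint;
}
```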

Is there a way to convert a single ColorSpacePoint to a CameraSpacePoint? If so, how? Otherwise, how could I speed up ColorSpace-to-CameraSpace mapping?

Are there any other SDKs (libfreenect2, etc.) that offer a more efficient way of retrieving the depth for a position on the color stream?

Upvotes: 0

Views: 440

Answers (1)

nikosm

Reputation: 106

I have used the open-source library PCL with RGBD cameras (an Intel RealSense in my case). It might be worth checking out, though I'm not sure how well it would cope with the limitations of your older PC.
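For what it's worth, with PCL the lookup itself is trivial once you have an organized RGBD point cloud, because image coordinates index straight into the cloud. A minimal sketch, assuming cloud is an organized pcl::PointCloud<pcl::PointXYZRGB> produced by whatever grabber/driver you end up using (getting such a cloud out of a Kinect v2 is the part that depends on your setup):

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <cmath>

// Sketch only: cloud is assumed to be an organized RGBD cloud
// (width = image width, height = image height) from your grabber.
pcl::PointXYZRGB lookup3D(const pcl::PointCloud<pcl::PointXYZRGB>& cloud,
                          int u, int v) // (column, row) in the image
{
    // For organized clouds, at(column, row) returns the point behind that pixel.
    pcl::PointXYZRGB p = cloud.at(u, v);
    if (!std::isfinite(p.z))
    {
        // No valid depth at this pixel (occlusion / out of range).
    }
    return p;
}
```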

Upvotes: 0
