user3004528

Reputation: 21

Mapping Depth pixels to color pixels

I am new to kinect. Can anyone tell me how I can map depth pixels to color pixels?
I found this sample code:

P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
P3D.z = depth(x_d,y_d)

P3D' = R . P3D + T

P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb

I am working without a Kinect. When I use this code in my C++ program I get a lot of errors.

Thanks in advance.

Upvotes: 1

Views: 2346

Answers (3)

doizuc

Reputation: 408

You can also use OpenKinect/libfreenect. There are wrappers for C++ and OpenCV that work pretty well. If you want to test your code without a Kinect, you can use fakenect, which is included in libfreenect.

Regarding calibration and camera-to-real-world coordinates, the easiest way is to use freenect's default parameters to align the color and depth images via the FREENECT_DEPTH_REGISTERED flag. If you choose this option, you will need to hack fakenect, replacing every FREENECT_DEPTH_11BIT with the FREENECT_DEPTH_REGISTERED flag.

The function void freenect_camera_to_world(freenect_device* dev, int cx, int cy, int wz, double* wx, double* wy); will let you map color pixels to depth pixels when using the registered mode. If you want to do the opposite (map depth pixels to color pixels), there are good explanations here:

http://nicolas.burrus.name/index.php/Research/KinectCalibration
http://labs.manctl.com/rgbdemo/index.php/Documentation/KinectCalibrationTheory

Upvotes: 2

Angelos B

Reputation: 96

You can easily map pixels from depth frames to pixels from color frames by reading the U,V texture-mapping parameters via the Kinect SDK. For every pixel coordinate (i,j) of the depth frame D(i,j), the corresponding pixel coordinate in the color frame is given by (U(i,j), V(i,j)), so the color is C(U(i,j), V(i,j)).

The U,V functions are stored in the hardware of each Kinect, and they differ from device to device because each depth camera is aligned slightly differently with respect to its video camera, due to tiny variations when the cameras are glued to the board at the factory. You don't have to worry about that, though, if you read U,V from the Kinect SDK.

Below I give you an image example and an actual source code example using the Kinect SDK in Java with the J4K open source library (you can do something similar on your own in C/C++):

public class Kinect extends J4KSDK {

    VideoFrame videoTexture;

    public Kinect() {
        super();
        videoTexture = new VideoFrame();
    }

    @Override
    public void onDepthFrameEvent(short[] packed_depth, int[] U, int[] V) {
        DepthMap map = new DepthMap(depthWidth(), depthHeight(), packed_depth);
        if (U != null && V != null) map.setUV(U, V, videoWidth(), videoHeight());
    }

    @Override
    public void onVideoFrameEvent(byte[] data) {
        videoTexture.update(videoWidth(), videoHeight(), data);
    }
}

(Image example showing 3 different perspectives of the same depth-video aligned frame.)

I hope that this helps you!

Upvotes: 0

jaho

Reputation: 4992

Use the CoordinateMapper class provided in the Kinect SDK:

http://msdn.microsoft.com/en-us/library/microsoft.kinect.coordinatemapper_methods.aspx

Upvotes: 0
