W.laarakkers

Reputation: 21

Merging RGB and depth images from a Kinect

I'm creating a vision algorithm that is implemented in a Simulink S-function (which is C++ code). I have accomplished everything I wanted except the alignment of the color and depth images.

My question is: how can I make the two images correspond to each other? In other words, how can I build a 3D image with OpenCV?

I know my question might be a little vague, so I will include my code, which should explain what I am after:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
// reading in the color and depth image 
Mat color = imread("whitepaint_col.PNG", CV_LOAD_IMAGE_UNCHANGED);
Mat depth = imread("whitepaint_dep.PNG", CV_LOAD_IMAGE_UNCHANGED);

// show bouth the color and depth image
namedWindow("color", CV_WINDOW_AUTOSIZE);
imshow("color", color);
namedWindow("depth", CV_WINDOW_AUTOSIZE);
imshow("depth", depth);

// thershold the color image for the color white
Mat onlywhite;
inRange(color, Scalar(200, 200, 200), Scalar(255, 255, 255), onlywhite);

//display the mask
namedWindow("onlywhite", CV_WINDOW_AUTOSIZE);
imshow("onlywhite", onlywhite);

// apply the mask to the depth image
Mat nocalibration;
depth.copyTo(nocalibration, onlywhite);

//show the result
namedWindow("nocalibration", CV_WINDOW_AUTOSIZE);
imshow("nocalibration", nocalibration);


waitKey(0);
destroyAllWindows;
return 0;
}

Output of the program:

[screenshot of the program output]

As can be seen in the output of my program, when I apply the onlywhite mask to the depth image, the quadcopter body does not come out as a single color. The reason for this is that there is a mismatch between the two images.

I know that I need the calibration parameters of my cameras, and I got these from the last person who worked with this setup. The calibration was done in MATLAB and resulted in the following.

MATLAB calibration results:

https://i.sstatic.net/JwFi5.png
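
For reference, this is roughly how I represent those parameters on the OpenCV side. Every number below is a placeholder; the real values are the ones from the MATLAB results above:

#include "opencv2/opencv.hpp"

using namespace cv;

// Placeholder calibration data: each value must be replaced with the
// corresponding number from the MATLAB calibration results.
int main()
{
    // intrinsic matrix of the depth (IR) camera: [fx 0 cx; 0 fy cy; 0 0 1]
    Mat K_depth = (Mat_<double>(3, 3) << 575.0,   0.0, 319.5,
                                           0.0, 575.0, 239.5,
                                           0.0,   0.0,   1.0);

    // intrinsic matrix of the color camera
    Mat K_color = (Mat_<double>(3, 3) << 525.0,   0.0, 319.5,
                                           0.0, 525.0, 239.5,
                                           0.0,   0.0,   1.0);

    // distortion coefficients (k1, k2, p1, p2, k3) of each camera
    Mat dist_depth = Mat::zeros(1, 5, CV_64F);
    Mat dist_color = Mat::zeros(1, 5, CV_64F);

    // rotation and translation that take a 3D point from the depth camera
    // frame to the color camera frame (from the stereo calibration)
    Mat R = Mat::eye(3, 3, CV_64F);
    Mat T = (Mat_<double>(3, 1) << 0.025, 0.0, 0.0); // ~2.5 cm baseline, placeholder

    std::cout << "K_depth =\n" << K_depth << std::endl;
    return 0;
}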

I have spent a lot of time reading the OpenCV page about Camera Calibration and 3D Reconstruction (I cannot include the link because of my Stack Exchange reputation level).

But I cannot for the life of me figure out how to accomplish my goal of assigning the correct depth value to each colored pixel.

I tried using reprojectImageTo3D(), but I cannot figure out the Q matrix. I also tried a lot of other functions from that page, but I cannot seem to get my inputs right.

Upvotes: 2

Views: 3560

Answers (2)

rhcpfan

Reputation: 567

As far as I know, MATLAB has very good support for the Kinect (especially for v1). You can use a function named alignColorToDepth, as follows:

[alignedFlippedImage,flippedDepthImage] = alignColorToDepth(depthImage,colorImage,depthDevice)

The returned values are alignedFlippedImage (the registered RGB image) and flippedDepthImage (the registered depth image). These two images are aligned and ready for you to process.

You can find more at this MathWorks documentation page.

Hope it's what you need :)

Upvotes: 1

Brian Lynch

Reputation: 542

As far as I can tell, you are missing the transformation between camera coordinate frames. The Kinect (v1 and v2) uses two separate camera systems to capture the depth and RGB data, and so there is a translation and rotation between them. You may be able to assume no rotation, but you will have to account for the translation to fix the misalignment you are seeing.

Try starting with this thread.
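
To make that concrete, here is a rough sketch of registering the depth image to the color image pixel by pixel, assuming you have the intrinsics of both cameras and the rotation/translation between them from your MATLAB calibration. Lens distortion is ignored to keep it short, and all parameter names and conventions here are mine, not from your setup:

#include "opencv2/opencv.hpp"

using namespace cv;

// Sketch only: map every depth pixel into the color image using the
// calibration between the two cameras. All matrices are CV_64F;
// depth_mm is a 16-bit (CV_16UC1) depth image in millimeters.
Mat registerDepthToColor(const Mat& depth_mm,
                         const Mat& K_depth,          // 3x3 intrinsics of the depth camera
                         const Mat& K_color,          // 3x3 intrinsics of the color camera
                         const Mat& R, const Mat& T,  // depth-to-color rotation / translation (meters)
                         Size colorSize)
{
    Mat registered = Mat::zeros(colorSize, CV_16UC1);

    double fxd = K_depth.at<double>(0, 0), fyd = K_depth.at<double>(1, 1);
    double cxd = K_depth.at<double>(0, 2), cyd = K_depth.at<double>(1, 2);
    double fxc = K_color.at<double>(0, 0), fyc = K_color.at<double>(1, 1);
    double cxc = K_color.at<double>(0, 2), cyc = K_color.at<double>(1, 2);

    for (int v = 0; v < depth_mm.rows; ++v)
    {
        for (int u = 0; u < depth_mm.cols; ++u)
        {
            ushort d = depth_mm.at<ushort>(v, u);
            if (d == 0) continue;                 // no depth measurement
            double z = d / 1000.0;                // millimeters -> meters

            // back-project the pixel into the depth camera frame
            Mat p_depth = (Mat_<double>(3, 1) << (u - cxd) * z / fxd,
                                                 (v - cyd) * z / fyd,
                                                 z);

            // move the 3D point into the color camera frame
            Mat p_color = R * p_depth + T;
            double zc = p_color.at<double>(2);
            if (zc <= 0) continue;

            // project into the color image
            int uc = cvRound(fxc * p_color.at<double>(0) / zc + cxc);
            int vc = cvRound(fyc * p_color.at<double>(1) / zc + cyc);

            if (uc >= 0 && uc < colorSize.width && vc >= 0 && vc < colorSize.height)
                registered.at<ushort>(vc, uc) = d;
        }
    }
    return registered;
}

Calling this with your depth Mat and the size of the color image gives you a depth image defined on the color camera's pixel grid, so your inRange mask and depth.copyTo(nocalibration, onlywhite) then refer to the same pixels.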

Upvotes: 0
