Reputation: 78
We are doing a project using Kinect and OpenCV. I am completely new to 3D analysis. The main aim is to take the depth and RGB information from the Kinect, process the RGB Mat (basic filtering and threshold functions), and combine the processed RGB image with the original depth information. I need the basic steps to follow and which tools are suitable (OpenCV, OpenGL, OpenNI, Kinect SDK, etc.):
1) How do we pass the depth and RGB information from the Kinect to OpenCV?
2) Can we access the RGB image individually to process it?
3) How do we combine the two, and what functions should we use to display the output?
We are using Windows 64-bit and a Kinect as the 3D sensor. We are planning to use OpenNI to get the RGB and depth information, process the RGB in OpenCV, and then display the result (processed RGB + depth) in a window with the help of OpenGL.
Upvotes: 4
Views: 3347
Reputation: 11420
First of all, I think your question is too general; the answer will depend on which libraries/drivers you use. For instance, if you use OpenNI you pass the information one way, and if you use the Kinect SDK you will have to use another method.
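For question 1), if you go the OpenNI2 route, the rough idea is to read a frame from each stream and wrap the buffers in cv::Mat headers. This is only a minimal sketch (OpenNI2 + OpenCV, default pixel formats assumed, error handling mostly omitted), not the exact code of my tool:

```cpp
// Minimal sketch: OpenNI2 -> cv::Mat (assumes default 1 mm depth / RGB888 formats).
#include <OpenNI.h>
#include <opencv2/opencv.hpp>

int main()
{
    openni::OpenNI::initialize();

    openni::Device device;
    if (device.open(openni::ANY_DEVICE) != openni::STATUS_OK)
        return 1;

    // One stream for depth, one for color.
    openni::VideoStream depth, color;
    depth.create(device, openni::SENSOR_DEPTH);
    color.create(device, openni::SENSOR_COLOR);
    depth.start();
    color.start();

    openni::VideoFrameRef depthFrame, colorFrame;
    depth.readFrame(&depthFrame);
    color.readFrame(&colorFrame);

    // Wrap the OpenNI buffers in cv::Mat headers (no copy).
    cv::Mat depthMat(depthFrame.getHeight(), depthFrame.getWidth(),
                     CV_16UC1, (void*)depthFrame.getData());
    cv::Mat rgbMat(colorFrame.getHeight(), colorFrame.getWidth(),
                   CV_8UC3, (void*)colorFrame.getData());

    // OpenNI delivers RGB; OpenCV expects BGR for display/processing.
    cv::Mat bgrMat;
    cv::cvtColor(rgbMat, bgrMat, cv::COLOR_RGB2BGR);

    depth.stop();
    color.stop();
    openni::OpenNI::shutdown();
    return 0;
}
```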
2) Yes, you can access the RGB images independently of the depth information and process them.
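For example, the basic filtering and thresholding you mention can be done entirely on the color Mat. A minimal sketch (plain OpenCV; bgrMat is the BGR image obtained above, and the threshold value is arbitrary):

```cpp
#include <opencv2/opencv.hpp>

// Sketch: basic filtering + thresholding on the color image only.
cv::Mat segmentRgb(const cv::Mat& bgrMat)
{
    cv::Mat gray, blurred, mask;
    cv::cvtColor(bgrMat, gray, cv::COLOR_BGR2GRAY);             // drop color information
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);         // smooth sensor noise
    cv::threshold(blurred, mask, 128, 255, cv::THRESH_BINARY);  // 128 is an arbitrary example value
    return mask;                                                // binary mask of the pixels of interest
}
```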
3) To combine the two, I suppose you mean depth and RGB. This also depends on the libraries and drivers you use, as well as how you want to store the result, e.g. point clouds, images, etc.
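As one concrete way of combining them (a sketch that assumes the depth stream is registered/aligned to the color stream, so pixel (x, y) corresponds in both images), you can use the mask computed from the RGB image to cut out the matching depth values and color-code them for display:

```cpp
#include <opencv2/opencv.hpp>

// Sketch: combine a binary mask from the RGB image with the raw depth.
// Assumes depthMat (CV_16UC1, mm) and mask (CV_8UC1) are pixel-aligned.
void showMaskedDepth(const cv::Mat& depthMat, const cv::Mat& mask)
{
    // Keep depth only where the RGB-based mask is set.
    cv::Mat maskedDepth;
    depthMat.copyTo(maskedDepth, mask);

    // Scale 16-bit depth (assumed up to ~4 m here) down to 8 bits for display.
    cv::Mat depth8;
    maskedDepth.convertTo(depth8, CV_8U, 255.0 / 4000.0);

    // False-color it so near/far is easy to see.
    cv::Mat depthColor;
    cv::applyColorMap(depth8, depthColor, cv::COLORMAP_JET);

    cv::imshow("masked depth", depthColor);
    cv::waitKey(1);
}
```

With OpenNI2 the alignment itself can usually be requested from the driver with device.setImageRegistrationMode(openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR), if the device supports it.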
My suggestion is to define exactly what you want to achieve, with which libraries and on which operating system, and edit your question accordingly.
I created a tool using OpenNI2, OpenCV and Qt. I have only tested it with PrimeSense and structure.io cameras on Linux, but it may give you an idea of what to do exactly. The tool can do some basic thresholding and save the data in different formats (pcd, images, oni). Link
I also used almost the same approach with OpenNI 1, Avin's driver and a Kinect 1 camera. On Wednesday I can upload that code so you can take a look, but it is basically the same; only the initialization changes a little bit.
As a basic checklist of things you should do when working with RGB-D images:
You may also take a look at the Point Cloud Library (PCL); it has a nice wrapper that will do the job, and then it is easy to turn the data into a point cloud.
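For reference, a condensed version of PCL's standard OpenNI grabber example (PCL 1.x built with OpenNI support; the point type and viewer are just for illustration) already gives you a colored point cloud, i.e. depth and RGB combined:

```cpp
// Condensed version of PCL's standard OpenNI grabber example.
#include <pcl/io/openni_grabber.h>
#include <pcl/visualization/cloud_viewer.h>
#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>

class SimpleOpenNIViewer
{
public:
  SimpleOpenNIViewer() : viewer("RGB-D cloud") {}

  // Called by the grabber every time a new colored cloud is available.
  void cloud_cb(const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr& cloud)
  {
    if (!viewer.wasStopped())
      viewer.showCloud(cloud);
  }

  void run()
  {
    pcl::Grabber* grabber = new pcl::OpenNIGrabber();

    boost::function<void (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr&)> f =
        boost::bind(&SimpleOpenNIViewer::cloud_cb, this, _1);

    grabber->registerCallback(f);
    grabber->start();

    while (!viewer.wasStopped())
      boost::this_thread::sleep(boost::posix_time::seconds(1));

    grabber->stop();
    delete grabber;
  }

  pcl::visualization::CloudViewer viewer;
};

int main()
{
  SimpleOpenNIViewer v;
  v.run();
  return 0;
}
```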
I hope this information helps you.
UPDATE: For a Kinect 1 on Windows, it is possible to install the latest Kinect SDK and use OpenNI2 without installing additional drivers.
If everything is good up to here, then you only need to:
I can give you code snippets if you are in doubt about any of the steps; just ask a new question about it and I will answer it.
I hope this helps you.
Upvotes: 3