Reputation: 213
I am working on a 3D model reconstruction application with a Kinect sensor. I use the Microsoft SDK to get depth data, and I want to calculate the real-world location of each point. I have read several articles about this and implemented several depth-calibration methods, but none of them works in my application. The closest calibration was http://openkinect.org/wiki/Imaging_Information but my result in Meshlab was not acceptable. I calculate the depth value with this method:
private double GetDistance(byte firstByte, byte secondByte)
{
    double distance = (double)(firstByte >> 3 | secondByte << 5);
    return distance;
}
I then used the methods below to calculate the real-world coordinates:
public static float RawDepthToMeters(int depthValue)
{
    if (depthValue < 2047)
    {
        return (float)(0.1 / ((double)depthValue * -0.0030711016 + 3.3309495161));
    }
    return 0.0f;
}
public static Point3D DepthToWorld(int x, int y, int depthValue)
{
    const double fx_d = 5.9421434211923247e+02;
    const double fy_d = 5.9104053696870778e+02;
    const double cx_d = 3.3930780975300314e+02;
    const double cy_d = 2.4273913761751615e+02;

    double depth = RawDepthToMeters(depthValue);
    Point3D result = new Point3D(
        (float)((x - cx_d) * depth / fx_d),
        (float)((y - cy_d) * depth / fy_d),
        (float)depth);
    return result;
}
These methods did not work well, and the generated scene was not correct. I then used the method below; its result is better than the previous method's, but it is still not acceptable.
public static Point3D DepthToWorld(int x, int y, int depthValue)
{
    const int w = 640;
    const int h = 480;
    int minDistance = -10;
    double scaleFactor = 0.0021;
    Point3D result = new Point3D(
        (x - w / 2) * (depthValue + minDistance) * scaleFactor * (w / h),
        (y - h / 2) * (depthValue + minDistance) * scaleFactor,
        depthValue);
    return result;
}
I would appreciate it if you could tell me how to calculate the real-world position based on the depth pixel values computed by my method.
Upvotes: 1
Views: 5978
Reputation: 2702
The GetDistance()
function you're using to calculate the real depth is tied to Kinect player detection, so check that you are opening your Kinect stream accordingly, or get only the raw depth data:
Runtime nui = Runtime.Kinects[0];
nui.Initialize(RuntimeOptions.UseDepth);
nui.DepthStream.Open(
    ImageStreamType.Depth,
    2,
    ImageResolution.Resolution320x240,
    ImageType.Depth);
and then compute the depth by simply shifting the second byte left by 8 bits:
Distance(0,0) = (int)(Bits[0] | Bits[1] << 8);
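Applied to a whole frame, that bit shift looks roughly like the sketch below (the method name and the `bits` buffer argument are mine; `bits` stands for the byte array delivered by the depth frame event):

```csharp
// Convert a raw depth buffer (2 bytes per pixel, ImageType.Depth,
// no player index) to per-pixel distances.
// Each pixel is little-endian: low byte first, high byte second.
public static int[] DepthFrameToRaw(byte[] bits, int width, int height)
{
    int[] depth = new int[width * height];
    for (int i = 0; i < depth.Length; i++)
    {
        depth[i] = bits[2 * i] | (bits[2 * i + 1] << 8);
    }
    return depth;
}
```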
The first calibration method should work reasonably well, though you could improve it slightly using the better approximation given by Stéphane Magnenat:
distance = 0.1236 * tan(rawDisparity / 2842.5 + 1.1863) in meters
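As a drop-in replacement for your RawDepthToMeters(), that formula could look like this (the method name is mine; the 2047 cutoff mirrors your existing guard for invalid disparities):

```csharp
// Stéphane Magnenat's approximation: converts an 11-bit raw
// disparity value to a distance in meters.
public static float RawDepthToMetersTan(int rawDisparity)
{
    if (rawDisparity < 2047)
    {
        return (float)(0.1236 * Math.Tan(rawDisparity / 2842.5 + 1.1863));
    }
    return 0.0f;  // 2047 marks "no reading"
}
```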
If you need more accurate calibration values, you should calibrate your Kinect yourself, for example with the MATLAB Kinect calibration tool:
http://sourceforge.net/projects/kinectcalib/
Then double-check the obtained values against the ones you are currently using, which were provided by Nicolas Burrus.
EDIT
Reading your question again, I noticed that you are using the Microsoft SDK, so the values
returned by the Kinect sensor are already real distances in mm. You do not need the RawDepthToMeters()
function; it should be used only with the unofficial SDK.
The hardware creates a depth map that is a non-linear function of the disparity values, with 11 bits of precision. The Kinect SDK driver converts these disparity values to mm out of the box and rounds the result to an integer. The MS Kinect SDK has an 800 mm to 4000 mm depth range.
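Given that, your first DepthToWorld could feed the millimeter depth straight into the pinhole model, skipping RawDepthToMeters() entirely. A sketch under that assumption (the method name is mine; the intrinsics are the Burrus values from your question, and calibrating your own sensor would give better ones):

```csharp
// Back-project a pixel to world coordinates when the depth value
// is already in millimeters, as the MS SDK delivers it.
public static Point3D DepthToWorldMm(int x, int y, int depthMm)
{
    const double fx_d = 5.9421434211923247e+02;
    const double fy_d = 5.9104053696870778e+02;
    const double cx_d = 3.3930780975300314e+02;
    const double cy_d = 2.4273913761751615e+02;

    double depth = depthMm / 1000.0;  // mm -> meters
    return new Point3D(
        (float)((x - cx_d) * depth / fx_d),
        (float)((y - cy_d) * depth / fy_d),
        (float)depth);
}
```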
Upvotes: 1