Reputation: 613
I am trying to map coordinates from the color space to the camera space. The code I am using is the following:
HRESULT ModelRecognizer::MapColorToCameraCoordinates(const std::vector<ColorSpacePoint>& colorsps, std::vector<CameraSpacePoint>& camerasps)
{
    //Access frame
    HRESULT hr = GetDepthFrame();
    if (SUCCEEDED(hr))
    {
        ICoordinateMapper* pMapper;
        hr = m_pKinectSensor->get_CoordinateMapper(&pMapper);
        if (SUCCEEDED(hr))
        {
            CameraSpacePoint* cameraSpacePoints = new CameraSpacePoint[cColorWidth * cColorHeight];
            hr = pMapper->MapColorFrameToCameraSpace(nDepthWidth * nDepthHeight, depthImageBuffer, cColorWidth * cColorHeight, cameraSpacePoints);
            if (SUCCEEDED(hr))
            {
                for (ColorSpacePoint colorsp : colorsps)
                {
                    long colorIndex = (long)(colorsp.Y * cColorWidth + colorsp.X);
                    CameraSpacePoint csp = cameraSpacePoints[colorIndex];
                    camerasps.push_back(csp);
                }
            }
            delete[] cameraSpacePoints;
        }
    }
    ReleaseDepthFrame();
    return hr;
}
I do not get any errors; however, the result seems to be rotated by 180 degrees and offset. Does anyone have a suggestion as to what I am doing wrong? Any help is appreciated.
Just to give the bigger picture of why I need this:
I am tracking colored tape pasted on a table in the color image using OpenCV. I then create walls at the locations of the tape in a 3D mesh. In addition, I am using KinectFusion to generate a mesh of the other objects on the table. However, when I open both meshes in MeshLab, the misalignment is clearly visible. Since I assume KinectFusion's mesh is created correctly in camera space, and I create the wall mesh exactly at the CameraSpacePoints returned by the function above, I am fairly sure the error lies in the coordinate mapping.
Images showing the misalignment can be found at https://i.sstatic.net/DhQU7.png , https://i.sstatic.net/tZhKT.png
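For context, the tape corners are extracted from the color image roughly as follows. This is only a minimal sketch of the detection step, not my exact code; the HSV thresholds and the helper name detectTapeCorners are placeholders.
#include <opencv2/opencv.hpp>
#include <Kinect.h>   // for ColorSpacePoint
#include <vector>

// Hypothetical helper: finds colored tape in a BGRA color frame and returns
// its corner pixels as ColorSpacePoints (color-pixel coordinates).
std::vector<ColorSpacePoint> detectTapeCorners(const cv::Mat& colorFrameBgra)
{
    std::vector<ColorSpacePoint> corners;

    // Threshold the tape color in HSV; the range below is a placeholder.
    cv::Mat bgr, hsv, mask;
    cv::cvtColor(colorFrameBgra, bgr, cv::COLOR_BGRA2BGR);
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(40, 80, 80), cv::Scalar(80, 255, 255), mask);

    // Each sufficiently large blob is treated as one piece of tape; its
    // minimum-area rectangle gives four corner pixels.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& contour : contours)
    {
        if (cv::contourArea(contour) < 500.0)
            continue;
        cv::Point2f rectCorners[4];
        cv::minAreaRect(contour).points(rectCorners);
        for (const cv::Point2f& p : rectCorners)
        {
            ColorSpacePoint csp;
            csp.X = p.x;
            csp.Y = p.y;
            corners.push_back(csp);
        }
    }
    return corners;
}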
Upvotes: 4
Views: 5058
Reputation: 613
I finally figured it out: for whatever reason, the returned CameraSpacePoints were mirrored about the origin in X and Y, but not in Z. If anyone has an explanation for this, I am still interested.
It works with the following code now:
/// <summary>
/// Maps coordinates from ColorSpace to CameraSpace.
/// Expects that the points in ColorSpace are mirrored in X (as the Kinect returns them by default).
/// </summary>
HRESULT ModelRecognizer::MapColorToCameraCoordinates(const std::vector<ColorSpacePoint>& colorsps, std::vector<CameraSpacePoint>& camerasps)
{
    //Access frame
    HRESULT hr = GetDepthFrame();
    if (SUCCEEDED(hr))
    {
        ICoordinateMapper* pMapper;
        hr = m_pKinectSensor->get_CoordinateMapper(&pMapper);
        if (SUCCEEDED(hr))
        {
            CameraSpacePoint* cameraSpacePoints = new CameraSpacePoint[cColorWidth * cColorHeight];
            hr = pMapper->MapColorFrameToCameraSpace(nDepthWidth * nDepthHeight, depthImageBuffer, cColorWidth * cColorHeight, cameraSpacePoints);
            if (SUCCEEDED(hr))
            {
                for (ColorSpacePoint colorsp : colorsps)
                {
                    // Round to the nearest color pixel before indexing into the mapped frame
                    int colorX = static_cast<int>(colorsp.X + 0.5f);
                    int colorY = static_cast<int>(colorsp.Y + 0.5f);
                    long colorIndex = (long)(colorY * cColorWidth + colorX);
                    CameraSpacePoint csp = cameraSpacePoints[colorIndex];
                    // Mirror X and Y back, since the mapped points come back mirrored at the origin in X and Y
                    camerasps.push_back(CameraSpacePoint{ -csp.X, -csp.Y, csp.Z });
                }
            }
            delete[] cameraSpacePoints;
        }
    }
    ReleaseDepthFrame();
    return hr;
}
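For completeness, this is roughly how the function is used to build the wall mesh; recognizer, colorFrame, and addWallVertex are placeholder names, not the actual objects in my project.
std::vector<ColorSpacePoint> tapeCorners = detectTapeCorners(colorFrame); // tape corners from OpenCV, see question
std::vector<CameraSpacePoint> wallCorners;
HRESULT hr = recognizer.MapColorToCameraCoordinates(tapeCorners, wallCorners);
if (SUCCEEDED(hr))
{
    // wallCorners are metric 3D points in the same camera space as the
    // KinectFusion mesh, so the wall geometry can be built directly from them.
    for (const CameraSpacePoint& p : wallCorners)
        addWallVertex(p.X, p.Y, p.Z);
}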
Upvotes: 3