user9506206

Reputation: 11

Mapping infrared images to color images in the RealSense Library

I am currently using an Intel RealSense D435 camera. I want to align the left-infrared camera with the color camera.

The align function provided by the RealSense library can only align the depth and color streams.

I heard that in the RealSense camera, the left-infrared camera is already aligned with the depth camera.

However, I cannot map the infrared image to the color image with this information alone. The depth image can be mapped onto the color image through the align function, but I wonder how I can map the color image to the left-infrared image, which is aligned to the original (unaligned) depth image.

----------------------------------------
[RealSense Customer Engineering Team Comment] @Panepo The align class used in the librealsense demos maps between depth and some other stream, and vice versa. We do not offer other forms of stream alignment.

But here is one suggestion for you to try. Basically, the mapping is a triangulation technique where we go through the intersection point of a pixel in 3D space to find its origin in another frame; this method works properly when the source data is depth (Z16 format). One possible way to map between two non-depth streams is to play all three streams (depth + IR + RGB), calculate the UV map from depth to color, and then use this UV map to remap the IR frame (remember that depth and left IR are aligned by design).

Hope the suggestion gives you some ideas.
----------------------------------------
This is the method suggested by Intel Corporation.
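For reference, the suggested pipeline above can be sketched in plain Python (this is only an illustration with made-up data, not the RealSense API; `remap_ir_to_color` and its inputs are hypothetical names, and the UV map is assumed to be what the SDK's point-cloud texture mapping would produce):

```python
def remap_ir_to_color(ir, uv_map, color_w, color_h):
    """Remap a left-IR image into the color frame via a depth->color UV map.

    ir:     depth-sized 2D list of IR values (left IR and depth are aligned
            by design on the D435).
    uv_map: depth-sized 2D list of (u, v) normalized texture coordinates,
            one pair per depth pixel.
    Returns a color-sized 2D list with IR values written at the color-pixel
    positions they map to (0 where nothing maps).
    """
    out = [[0] * color_w for _ in range(color_h)]
    for y, row in enumerate(ir):
        for x, value in enumerate(row):
            u, v = uv_map[y][x]
            cx, cy = int(u * color_w), int(v * color_h)
            # Unmappable points carry zero or negative UVs; skip them.
            if 0 < cx < color_w and 0 < cy < color_h:
                out[cy][cx] = value
    return out
```

In a real program you would fill `uv_map` from the SDK's texture coordinates (see the answers below); here it just demonstrates the "depth-to-color UV map, then remap IR" idea.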

Can you explain what it means to solve the problem by creating a UV map from the depth and color images? And does the RealSense 2 library have a UV map function?

I would really appreciate an answer.

Upvotes: 1

Views: 1725

Answers (2)

kwyang

Reputation: 1

The same code as Valeria's, but using the latest .NET wrapper from RealSense SDK 2, downloaded April 2024.

var pointCloud = new PointCloud();
pointCloud.MapTexture(colorFrame);
var points = pointCloud.Process(depthFrame).As<Points>();
// 3D points: (x, y, z) per depth pixel
var vertices = new float[points.Count * 3];
points.CopyVertices(vertices);
// 2D texture map: normalized (u, v) per depth pixel
var texture_map = new float[points.Count * 2];
points.CopyTextureCoords(texture_map);

The texture_map values are normalized from 0 to 1. Some depth points cannot be mapped, because the field of view of the depth image is larger than that of the color image; those points have zero or negative values.

For the other points, texture_map[y * stride + 2x] and texture_map[y * stride + 2x + 1] hold the normalized mapping to the color pixel corresponding to the depth point (x, y), where stride is the depth image width * 2. The mapped color point is (texture_map[y * stride + 2x] * color_width, texture_map[y * stride + 2x + 1] * color_height).
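As a small sanity check on the indexing above, here is the same arithmetic in Python with made-up numbers (a sketch only; `color_pixel_for_depth_point` is a hypothetical helper, not an SDK function):

```python
def color_pixel_for_depth_point(texture_map, x, y, depth_w, color_w, color_h):
    # Two floats (u, v) per depth pixel, so one row spans depth_w * 2 entries.
    stride = depth_w * 2
    u = texture_map[y * stride + 2 * x]      # normalized column in the color image
    v = texture_map[y * stride + 2 * x + 1]  # normalized row in the color image
    return int(u * color_w), int(v * color_h)

# 3x2 depth image: flat texture map with two floats per depth pixel.
texture_map = [0.0] * (3 * 2 * 2)
# Pretend depth pixel (1, 1) maps to the centre of a 640x480 color image.
texture_map[1 * 6 + 2 * 1] = 0.5
texture_map[1 * 6 + 2 * 1 + 1] = 0.5
print(color_pixel_for_depth_point(texture_map, 1, 1, 3, 640, 480))  # (320, 240)
```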

The texture_map array can also be queried to get the depth point corresponding to a color point:

private static System.Drawing.Point MapColor2Depth(System.Drawing.Point pt)
{
    int x = pt.X;
    int y = pt.Y;
    int index = -1;
    if (int_texture_map != null)
    {
        // Linear scan: find a mapped color pixel within +/-1 of (x, y).
        for (int i = 0; i < int_texture_map.Length; i += 2)
        {
            if ((x >= int_texture_map[i] - 1 && x <= int_texture_map[i] + 1) &&
                (y >= int_texture_map[i + 1] - 1 && y <= int_texture_map[i + 1] + 1))
            {
                index = i;
                break;
            }
        }
    }
    if (index >= 0)
        return new System.Drawing.Point((index % stride_) / 2, index / stride_);

    return System.Drawing.Point.Empty;
}

int_texture_map is the array that contains the actual pixel-coordinate mapping, computed from the normalized texture_map.
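A rough Python sketch of how int_texture_map might be built from the normalized texture_map, and of the same reverse lookup that MapColor2Depth performs (the function names here are my own, and the data is made up):

```python
def build_int_texture_map(texture_map, color_w, color_h):
    """Scale normalized (u, v) pairs to integer color-pixel coordinates."""
    scaled = []
    for i in range(0, len(texture_map), 2):
        scaled.append(int(texture_map[i] * color_w))
        scaled.append(int(texture_map[i + 1] * color_h))
    return scaled

def map_color_to_depth(int_texture_map, x, y, stride):
    """Linear scan for a color pixel within +/-1, mirroring MapColor2Depth."""
    for i in range(0, len(int_texture_map), 2):
        if abs(int_texture_map[i] - x) <= 1 and abs(int_texture_map[i + 1] - y) <= 1:
            return (i % stride) // 2, i // stride  # depth (x, y)
    return None  # no depth point maps near this color pixel

# 2x2 depth image (stride = width * 2 = 4); depth pixel (0, 0) -> color (50, 50).
tm = [0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
itm = build_int_texture_map(tm, 100, 100)
print(map_color_to_depth(itm, 50, 50, stride=4))  # (0, 0)
```

Note that the scan returns the first depth point within the tolerance, so when several depth pixels land near the same color pixel, only one of them is reported.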

Upvotes: 0

Valeria Bogdevich

Reputation: 1

Yes, Intel RealSense SDK 2.0 provides the PointCloud class. So you:
- configure the sensors
- start streaming
- obtain color and depth frames
- get the UV map as follows (C#):

var pointCloud = new PointCloud();
pointCloud.MapTexture(colorFrame);
var points = pointCloud.Calculate(depthFrame);
// One vertex and one UV coordinate per depth pixel.
var vertices = new Points.Vertex[depthFrame.Height * depthFrame.Width];
var uvMap = new Points.TextureCoordinate[depthFrame.Height * depthFrame.Width];
points.CopyTo(vertices);
points.CopyTo(uvMap);

The uvMap you get is a normalized depth-to-color mapping.

NOTE: if depth is aligned to color, the size of vertices and uvMap is calculated using the color frame width and height.

Upvotes: 0
