Reputation: 357
I have been provided with a 16-bit greyscale matrix (Mat type = CV_16UC1) that I would like to colorize using OpenCV and a custom colormap gradient stored in an integer array. The colormap contains 256 RGB colors and has the following format:
const int colormap[] = {R0, G0, B0, R1, G1, B1, ..., R255, G255, B255};
The grey-scale matrix contains a single unsigned char (between 0 and 255) per pixel, which I use as an index to find the corresponding RGB color in the colormap array.
The new colorized Mat is initialized as all black, with the same dimensions as the greyscale Mat, except that it is a 3-channel matrix.
cv::Mat colorizedMat = cv::Mat::zeros(matGreyScale.rows, matGreyScale.cols, CV_16UC3);
I then use the code below to loop over each pixel of the colorizedMat and update it with the RGB values obtained from the colormap:
for (int row = 0; row < colorizedMat.rows; row++) {
    for (int col = 0; col < colorizedMat.cols; col++) {
        int pixelVal = matGreyScale.at<uchar>(cv::Point(col,row));
        cv::Vec3b color(colormap[pixelVal*3+2], colormap[pixelVal*3+1], colormap[pixelVal*3]);
        colorizedMat.at<cv::Vec3b>(cv::Point(col,row)) = color;
    }
}
However, this has resulted in the colorizedMat only being filled approximately halfway, leaving the other half completely black, as shown below. The image also seems slightly stretched.
Original grey scale image: Colorized image:
If I change the bounds of the inner for loop to col < colorizedMat.cols*2, the full Mat is colored in; however, the image is very stretched.
How would I correctly colorize a greyscale Mat using a custom colormap? Any help is greatly appreciated!
The images are tiny because the camera I'm receiving them from has an 80x60 resolution.
Upvotes: 2
Views: 1405
Reputation: 342
A few issues I see.
Your matGreyScale was declared as CV_16UC1, but you access its elements with matGreyScale.at<uchar>(cv::Point(col,row)). This is incorrect: a CV_16UC1 matrix should be accessed using matGreyScale.at<unsigned short>(...). It's confusing because CV_16UC1 sounds like "unsigned char", but it really means "16-bit, unsigned, channels = 1". See: http://dovgalecs.com/blog/opencv-matrix-types/
If you declare colorizedMat as CV_16UC3, you should create vectors of type Vec3s to populate it. Alternatively, you can do as Q.H. suggested above and declare colorizedMat as CV_8UC3 and stick with vectors of type Vec3b (which makes more sense, as your colormapped data is scaled 0-255). See:
http://docs.opencv.org/2.4/modules/core/doc/basic_structures.html#vec
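Putting both fixes together, here is a minimal sketch of the corrected loop (assuming, as in your post, that the 16-bit grey values are already in the range 0-255 so they can be used directly as colormap indices):

cv::Mat colorizedMat = cv::Mat::zeros(matGreyScale.rows, matGreyScale.cols, CV_8UC3);
for (int row = 0; row < colorizedMat.rows; row++) {
    for (int col = 0; col < colorizedMat.cols; col++) {
        // CV_16UC1 elements are unsigned short, not uchar
        int pixelVal = matGreyScale.at<unsigned short>(row, col);
        // OpenCV stores pixels as BGR, so reverse the RGB order of the colormap
        cv::Vec3b color(colormap[pixelVal*3 + 2],   // B
                        colormap[pixelVal*3 + 1],   // G
                        colormap[pixelVal*3]);      // R
        // Vec3b now matches the CV_8UC3 depth, so every byte of the pixel gets written
        colorizedMat.at<cv::Vec3b>(row, col) = color;
    }
}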
Upvotes: 3
Reputation: 150735
The problem is that you declared colorizedMat with an incorrect depth:
cv::Mat colorizedMat = cv::Mat::zeros(matGreyScale.rows, matGreyScale.cols, CV_16UC3);
It should be CV_8UC3 instead of CV_16UC3. Since Vec3b is the same as Vec<uchar, 3>, you only update half of colorizedMat.data with the command
colorizedMat.at<cv::Vec3b>(cv::Point(col,row)) = color;
Also, to be safe, declare the colormap as uchar (or unsigned char) type:
const uchar colormap[] = {R0, G0, B0, R1, G1, B1, ..., R255, G255, B255};
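To see why only half of the buffer gets written, you can compare the per-pixel sizes of the two depths with a small standalone snippet (not from the original post; the 80x60 size just mirrors the camera resolution mentioned in the question):

#include <opencv2/core/core.hpp>
#include <iostream>

int main() {
    cv::Mat m16(60, 80, CV_16UC3), m8(60, 80, CV_8UC3);
    std::cout << m16.elemSize() << std::endl;    // 6 bytes per pixel for CV_16UC3
    std::cout << m8.elemSize() << std::endl;     // 3 bytes per pixel for CV_8UC3
    std::cout << sizeof(cv::Vec3b) << std::endl; // 3 bytes written per Vec3b assignment
    return 0;
}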
Upvotes: 1
Reputation: 920
I'm not familiar with OpenCV, but I think the arguments are in the wrong order when you call Point():
Point_ (_Tp _x, _Tp _y)
Template class for 2D points specified by its coordinates x and y.
Change your code to:
Point(row,col)
Might help, might not.
Upvotes: -1