Reputation: 11
This may be a simple question, but I have a problem when resizing a Bayer image (RGGB) stored in a cv::Mat
of type CV_8UC1. After resizing, I get a gray image when shown with cv::imshow
(before showing it I convert the resized image
to a BGR image).
Code that resizes the Bayer image:
cv::resize(bayerimg, resizedimg, cv::Size(), 0.6, 0.8);
I think this is because of the special structure of a Bayer image, so one cannot apply OpenCV's resize directly to it.
If so, how can I resize a Bayer image by an arbitrary ratio directly? Is there an algorithm for this task?
Also, I am looking for a C++ solution. Thank you.
I have tried
cv::resize(bayerimg, resizedimg, cv::Size(), 0.6, 0.8);
cv::cvtColor(resizedimg, bgr, cv::COLOR_BayerBG2BGR);
cv::imshow("rawimg", bgr);
cv::waitKey(0);
But I only get a gray image. What is wrong?
Upvotes: 0
Views: 315
Reputation: 2792
This represents a bare-bones proof-of-concept method to sum Bayer images and make a subsampled one, scaled to be smaller by an integer factor. The width and height should be divisible by the chosen scale factor, or the right-hand edge and bottom line will be corrupted (the latter may cause access faults). It can handle independent scaling for width and height.
It accepts raw Bayer mask data as an array of unsigned char and returns one scaled down by nx and ny in a similar fashion. It only handles the raw binary data; you have to provide the dimensions from the header.
I think it will somewhat distort the phase of the red and blue channels when scale factors are even numbers but that it should be a fairly good approximation for odd scale factors. I'd be interested to see how it performs on actual raw Bayer images. I'm aware that plenty of test patterns can be constructed to make it fail (but the same applies to all Bayer masked data).
I expect the results to be inferior to reconstructing the full 24 bit RGB colour image and then resampling it according to the Bayer rules but I think that they might well be acceptable for some purposes (albeit at lower resolution and smaller physical size). It might do what the OP wants if he can live with the chroma edge artefacts that will be introduced.
The code has only had the most cursory testing but reproduces expected results on the snippet of test data included in the code. It has only been tested for nx, ny = 2,3 and on tiny amounts of data. Here is the code:
#include <stdio.h>
unsigned char test[] = { 0, 1, 2, 4, 1, 16, 2, 32, 48, 64, 72, 96, 128, 0, 128, 64,
5, 6, 7, 8, 1, 0, 2, 0, 64, 48, 96, 72, 0, 128, 64, 128,
8, 9, 10, 11, 4, 64, 8, 128, 64, 48, 96, 72, 0, 128, 64, 128,
12, 13, 14, 255, 3,0, 4, 0, 0, 0, 4, 4,128, 1, 23, 54, 1,
1, 2, 2, 3, 3, 5,7, 9, 11 }; // extras added for testing 3x3
void DebugPrint(const char* name, unsigned char* buffer, int n, int m = 99999)
{
    if (n > 100) return;
    printf("\nDebug Print: %s\n", name);
    for (int i = 0; i < n; i++) // dump n elements of the array, with a line break every m
    {
        if (!(i % m)) printf("\n");
        printf("%4u", buffer[i]);
    }
    printf("\n");
}
void DownSampleBayer(unsigned char* inbuff, unsigned char* outbuff, int width, int height, int nx, int ny)
{
    int norm = nx * ny;
    if (width <= 16) {
        printf("\nTesting with dimensions %i x %i scale factors nx = %i, ny = %i", width, height, nx, ny);
        DebugPrint("original", inbuff, width * height, width);
    }
    if (width % nx) printf("warning: width %i is not a multiple of scale factor %i\n", width, nx);
    if (height % ny) printf("warning: height %i is not a multiple of scale factor %i\n", height, ny);
    for (int iy = 0; iy < height; iy += 2 * ny)
        for (int ix = 0; ix < width; ix += 2 * nx) // inner loop processes nx x ny 2x2 Bayer blocks
        {
            int isx = iy * width + ix;             // source Bayer array index
            int idx = (ix + iy * width / ny) / nx; // destination index = (iy/ny)*(width/nx) + ix/nx
            unsigned int x00, x01, x10, x11;       // Bayer block  x00 x01
            x00 = x01 = x10 = x11 = 0;             //              x10 x11
            for (int j = 0; j < 2 * ny * width; j += 2 * width) // sum over ny blocks down
            {
                for (int k = 0; k < 2 * nx; k += 2) // sum over nx blocks across
                {
                    int ibay = isx + j + k;
                    x00 += inbuff[ibay];
                    x01 += inbuff[ibay + 1];
                    x10 += inbuff[ibay + width];
                    x11 += inbuff[ibay + width + 1];
                }
            }
            outbuff[idx] = (x00 + norm / 2) / norm; // rounded average of each component
            outbuff[idx + 1] = (x01 + norm / 2) / norm;
            outbuff[idx + width / nx] = (x10 + norm / 2) / norm;
            outbuff[idx + width / nx + 1] = (x11 + norm / 2) / norm;
        }
    DebugPrint("out scaled array", outbuff, width * height / norm, width / nx);
    DebugPrint("out linear", outbuff, 20);
}
int main()
{
    // subject only to very limited testing on the test[] data above
    // assumes that test[] is a raw consecutive-byte Bayer masked data array
    // outputs a new Bayer array downscaled by nx in width and ny in height
    // the resulting array averages over each component in turn
    // appears to work OK for nx, ny = 2 or 3; untested beyond that or for large arrays
    unsigned char outbuff[100];
    for (int i = 0; i < 100; i++) outbuff[i] = 0;
    DownSampleBayer(test, outbuff, 16, 4, 2, 2);
    for (int i = 0; i < 100; i++) outbuff[i] = 0;
    DownSampleBayer(test, outbuff, 12, 6, 3, 3);
    for (int i = 0; i < 100; i++) outbuff[i] = 0;
    DownSampleBayer(test, outbuff, 6, 12, 3, 3);
}
I'm curious to see what artefacts this treatment produces on normal images. I'm pretty sure that there will be some along the lines of the links that @ChristophRackwitz has already posted. I suspect 2x subsampling is a worst case for deBayering and that 3x might be quite close to being well behaved.
The assumptions that go into deBayering have clearly been violated so sharp black to white edge transitions will suffer from chromatic artefacts (all Bayer images do to some extent).
Thinking about it some more whilst writing this post, I reckon I could create a weighted summation to get a much closer estimate of what a larger Bayer sensor would actually have measured. Anyway, this toy code should be enough for a discussion of whether the artefacts are tolerable or a more sophisticated treatment is needed. That would require knowing the Bayer pattern to treat R, G, B correctly, whereas simple binning doesn't.
Upvotes: 0