Nasher

Reputation: 21

Implementing an average filter doesn't work in OpenCV, but works in Java

What a pain this is. Basically, all I'm trying to do is implement a simple blur filter on an image. I'm using the same image in both cases, a 24-bit bitmap (Lena512.bmp). The Java implementation I wrote works as expected, but the C implementation, instead of blurring the region I want, actually darkens it and looks weird. I'm not sure whether this has something to do with RGB (BGR) conversion; I am able to set a particular pixel to a colour and read the same colour back.

I want to learn about image processing myself, not rely on the OpenCV library for everything; I'm only using it to grab pictures from my webcam and from disk.

This is my Java implementation, which works. img contains the 24-bit lena512.bmp.

private void applyFilter()
{
    // Apply the filter 3 times
    for (int n = 0; n < 3; n++) {
        for (int y = 100; y < 300; y++) {
            for (int x = 100; x < 300; x++) {
                int r = 0, g = 0, b = 0;
                for (int j = -1; j <= 1; j++) {
                    for (int k = -1; k <= 1; k++) {
                        int pixel = img.getRGB(x + k, y + j);
                        r += new Color(pixel).getRed();
                        g += new Color(pixel).getGreen();
                        b += new Color(pixel).getBlue();
                    }
                }
                r = r / 9; // Assume a 3x3 matrix, to average out.
                g = g / 9;
                b = b / 9;
                img.setRGB(x, y, new Color(r, g, b).getRGB());
            }
        }
    }
}

My C implementation looks like this. img contains the 24-bit lena512.bmp.

void applyAverage(IplImage *img)
{
    int x, y;
    int nx, ny;

    for (y = 100; y < 400; y++) {
        for (x = 100; x < 400; x++) {
            unsigned char r, g, b;
            for (ny = -1; ny <= 1; ny++) {
                for (nx = -1; nx <= 1; nx++) {
                    getPixel(img, x + nx, y + ny, &r, &g, &b);
                    r += r;
                    g += g;
                    b += b;
                }
            }
            r = r / 9;
            g = g / 9;
            b = b / 9;
            setPixel(img, x, y, &r, &g, &b);
        }
    }
}

Within the main method of the C implementation, I load the image as follows:

IplImage *frame = cvLoadImage("lena512.bmp",1);

My getPixel and setPixel work, as I'm able to set a pixel and read the same values back (I'm aware of the RGB -> BGR byte ordering, which isn't the problem). Obviously, I'm missing something here.
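For reference, the helpers look roughly like this (a sketch from memory, so the exact code may differ; it assumes the 8-bit, 3-channel image that cvLoadImage(..., 1) returns):

void getPixel(IplImage *img, int x, int y,
              unsigned char *r, unsigned char *g, unsigned char *b)
{
    /* Rows are padded to widthStep bytes; channels are stored B, G, R. */
    unsigned char *p = (unsigned char *)img->imageData
                       + y * img->widthStep + x * 3;
    *b = p[0];
    *g = p[1];
    *r = p[2];
}

void setPixel(IplImage *img, int x, int y,
              unsigned char *r, unsigned char *g, unsigned char *b)
{
    unsigned char *p = (unsigned char *)img->imageData
                       + y * img->widthStep + x * 3;
    p[0] = *b;
    p[1] = *g;
    p[2] = *r;
}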

Thanks people.

I've linked (my rep is too low to post images) a picture of the problem. Left is the Java implementation, right is the OpenCV one.

http://i176.photobucket.com/albums/w189/Phil128/Problem_zps431a4a4f.png

Upvotes: 0

Views: 228

Answers (2)

Nasher

Reputation: 21

Sorted. Silly me. I was accumulating into an unsigned char instead of an int, so the sums overflowed. What happens when I lack sleep :-D.
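For anyone hitting the same thing: the sum of nine 8-bit channel values can reach 9 * 255 = 2295, which wraps around in an unsigned char. A minimal sketch of the corrected inner loop, assuming the same getPixel/setPixel helpers as in the question (and reading each neighbour into separate temporaries, per the answer below):

int r = 0, g = 0, b = 0;  /* int accumulators; an unsigned char would overflow */
unsigned char cr, cg, cb;
for (ny = -1; ny <= 1; ny++) {
    for (nx = -1; nx <= 1; nx++) {
        getPixel(img, x + nx, y + ny, &cr, &cg, &cb); /* read into temporaries */
        r += cr;
        g += cg;
        b += cb;
    }
}
unsigned char ar = (unsigned char)(r / 9); /* 3x3 box average */
unsigned char ag = (unsigned char)(g / 9);
unsigned char ab = (unsigned char)(b / 9);
setPixel(img, x, y, &ar, &ag, &ab);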

Upvotes: 0

s.bandara

Reputation: 5664

Have a close look at this piece of your code that is supposed to accumulate intensity values:

getPixel(img,x + nx,y + ny,&r, &g, &b);
r += r;
...

getPixel keeps overwriting the accumulator variables r, g, and b in every iteration. This means that after you've queried the last pixel of the filter region, r, g, and b will just hold the values of that last pixel. Between r += r and, eventually, r = r / 9, the intensity of each filtered pixel therefore ends up being approximately two ninths of the intensity of its bottom-right neighbour in the original. Hence the region becoming very dark.

Try using separate variables to receive the intensity values of each pixel in the filter region, such as curr_r, curr_g, and curr_b in:

int r = 0, g = 0, b = 0;
for (ny = -1; ny <= 1; ny++) {
    for (nx = -1; nx <= 1; nx++) {
        unsigned char curr_r, curr_g, curr_b; /* matches getPixel's parameter types */
        getPixel(img, x + nx, y + ny, &curr_r, &curr_g, &curr_b);
        r += curr_r;
        g += curr_g;
        b += curr_b;
    }
}
r = r / 9;
g = g / 9;
b = b / 9;

Here, getPixel places the red intensity into curr_r, for example, and curr_r is then added to the accumulating variable r.

Another thing. You are passing the addresses of r, g, and b to setPixel, as in setPixel(img, x, y, &r, &g, &b). It could be that this is the correct way to do it, but I'd rather expect setPixel to accept the values, as in setPixel(img, x, y, r, g, b). It would be worth double-checking the declaration of that function.
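For example, the two conventions would be declared like this (hypothetical declarations, since the actual header isn't shown here):

/* By-pointer, matching the calls in the question: */
void setPixel(IplImage *img, int x, int y,
              unsigned char *r, unsigned char *g, unsigned char *b);

/* By-value, the more usual shape for a setter: */
void setPixel(IplImage *img, int x, int y,
              unsigned char r, unsigned char g, unsigned char b);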

Upvotes: 2
