Pavan

Reputation: 18518

What algorithm could I use to apply a fisheye distortion on an image?

Can anyone guide me in the best way of developing a filter algorithm for video processing?

Say for example I wanted to apply a fisheye lens filter onto an image, how would I process the pixels so that it would mimic this effect?

If I wanted to make the picture look more red, then I would subtract values from the blue and green components at each pixel, leaving behind only the red component.

This kind of distortion is more than just color processing, so I'd like to know how to manipulate the pixels in the correct way to mimic a fisheye lens filter, or say a pinch filter, and so forth.
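
For concreteness, here is a minimal Python/NumPy sketch of that color adjustment (the function name and `amount` value are placeholders, not from any particular library):

 import numpy as np

 def make_redder(image, amount=60):
     """Subtract `amount` from the blue and green channels of an 8-bit RGB image."""
     out = image.astype(np.int16)   # widen so the subtraction can go negative
     out[..., 1] -= amount          # green channel
     out[..., 2] -= amount          # blue channel
     return out.clip(0, 255).astype(np.uint8)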

EDIT:

Filter algorithm for VIDEO PROCESSING

Upvotes: 3

Views: 6987

Answers (2)

Brad Larson

Reputation: 170317

As Martin states, to apply a distortion to an image, rather than just a color correction, you need to somehow displace pixels within that image. You generally start with the output image and figure out which input pixel location to grab from to fill in each location in the output.

For example, to generate the pinch distortion I show in this answer, I use an OpenGL ES fragment shader that looks like the following:

 // Texture coordinate for this fragment, passed in from the vertex shader
 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;

 uniform highp vec2 center;   // center of the pinch, in texture coordinates
 uniform highp float radius;  // radius of the region to distort
 uniform highp float scale;   // strength of the pinch

 void main()
 {
     highp vec2 textureCoordinateToUse = textureCoordinate;
     highp float dist = distance(center, textureCoordinate);
     textureCoordinateToUse -= center;
     if (dist < radius)
     {
         // Rescale the offset from the center based on the distance;
         // a factor above 1.0 samples farther out, pinching the image inward
         highp float percent = 1.0 + ((0.5 - dist) / 0.5) * scale;

         textureCoordinateToUse = textureCoordinateToUse * percent;
     }
     textureCoordinateToUse += center;

     // Read the output color from the displaced input coordinate
     gl_FragColor = texture2D(inputImageTexture, textureCoordinateToUse);
 }

This GLSL code is applied to every pixel in the output image. It calculates the distance from the center of the pinched region to the current pixel coordinate, then uses that distance and the input scale parameter to rescale the pixel's offset from the center. The rescaled coordinate determines where in the input image the output color is read from.

Sampling a color from the input image at a displaced coordinate for each output pixel is what produces a distorted version of the input image. As you can see in my linked answer, slightly different functions for calculating this displacement can lead to very different distortions.
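
To show how swapping the displacement function changes the effect, here is a rough Python/NumPy sketch of the same gather-style sampling (nearest-neighbor, no filtering; my translation of the idea, not Brad's code):

 import numpy as np

 def radial_warp(image, scale=0.5):
     h, w = image.shape[:2]
     ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
     # Work in normalized coordinates, as offsets from the center (0.5, 0.5).
     u = xs / (w - 1) - 0.5
     v = ys / (h - 1) - 0.5
     dist = np.sqrt(u * u + v * v)

     # Pinch displacement, as in the shader above; a negative `scale`
     # turns the pinch into a fisheye-style bulge instead.
     percent = np.where(dist < 0.5, 1.0 + ((0.5 - dist) / 0.5) * scale, 1.0)

     # Displace each output coordinate and sample the input image there.
     src_x = np.round((u * percent + 0.5) * (w - 1)).clip(0, w - 1).astype(int)
     src_y = np.round((v * percent + 0.5) * (h - 1)).clip(0, h - 1).astype(int)
     return image[src_y, src_x]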

Upvotes: 5

Martin Beckett

Reputation: 96139

You apply an image warp. Basically, for each point in the transformed output image you have a mathematical formula that calculates where that point would have come from in the original image; you then just copy the pixel at those coordinates. OpenCV has functions to do this.

Normally of course you are trying to remove optical effects like fish-eye, but the principle is the same.

P.S. It's a little confusing to think of starting with the result and working back to the source, but you do it this way because many points in the source image might all map to the same point in the result, and you want an even grid of resulting pixels.
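
As a sketch of how this looks with OpenCV's remap function (the barrel-distortion formula and `strength` parameter here are illustrative, not from either answer):

 import cv2
 import numpy as np

 def barrel_distort(image, strength=0.4):
     h, w = image.shape[:2]
     ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
     # Normalized offsets of each *output* pixel from the image center.
     u = (xs - w / 2) / (w / 2)
     v = (ys - h / 2) / (h / 2)
     r2 = u * u + v * v
     # For every output pixel, compute where it "came from" in the source.
     factor = 1.0 + strength * r2
     map_x = (u * factor * (w / 2) + w / 2).astype(np.float32)
     map_y = (v * factor * (h / 2) + h / 2).astype(np.float32)
     # cv2.remap gathers a source pixel at those coordinates for each output pixel.
     return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)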

Upvotes: 3
