Nome Cognome

Reputation: 15

Bilinear interpolation on fisheye filter

I have to implement a fisheye transformation with bilinear interpolation. After transforming one pixel I no longer have integer coordinates, and I would like to map this pixel onto integer coordinates using bilinear interpolation. The problem is that everything I have found about bilinear interpolation on the internet (see for example Wikipedia) does the opposite: it gives the value at one non-integer position from the values of the four neighbors that have integer coordinates. I would like to do the opposite, i.e. map the one pixel with non-integer coordinates onto its four neighbors with integer coordinates. Surely there is something I am missing, and it would be helpful to understand where I am wrong.

EDIT: To be more clear: say I have the pixel (i,j)=(2,2) of the starting image. After the fisheye transformation I obtain non-integer coordinates, for example (2.1,2.2). I want to save this new pixel to a new image, but obviously I don't know into which pixel to save it, because of the non-integer coordinates. The easiest way is to truncate the coordinates, but the image quality is not very good, so I have to use bilinear interpolation. Still, I don't understand how it applies here, because I want to split my non-integer pixel among the neighboring pixels with integer coordinates of the new (transformed) image, while I only found descriptions of the opposite operation, i.e. estimating the value at non-integer coordinates from four integer pixels (http://en.wikipedia.org/wiki/Bilinear_interpolation)

Upvotes: 1

Views: 466

Answers (2)

BConic

Reputation: 8980

Your question is a little unclear. From what I understand, you have a regular image which you want to transform into a fisheye-like image. To do this, I am guessing you take each pixel coordinate {xr,yr} from the regular image and use the fisheye transformation to obtain the corresponding coordinates {xf,yf} in the fisheye-like image. You would like to assign the initial pixel intensity to the destination pixel; however, you do not know how to do this since {xf,yf} are not integer values.

If that's the case, you are actually approaching the problem backwards. You should start from integer pixel coordinates in the fisheye image, use the inverse fisheye transformation to obtain floating-point pixel coordinates in the regular image, and use bilinear interpolation to estimate the intensity at those floating-point coordinates from the 4 closest pixels with integer coordinates.

The basic procedure is as follows:

  1. Start with integer pixel coordinates (xf,yf) in the fisheye image (e.g. (2,3) in the fisheye image). You want to estimate the intensity If associated with these coordinates.
  2. Find the corresponding point in the "starting" image, by mapping (xf,yf) into the "starting" image using the inverse fisheye transformation. You obtain floating-point pixel coordinates (xs,ys) in the "starting" image (e.g. (2.2,2.5) in the starting image).
  3. Use bilinear interpolation to estimate the intensity Is at coordinates (xs,ys), based on the intensities of the 4 closest integer pixel coordinates in the "starting" image (e.g. (2,2), (2,3), (3,2), (3,3) in the starting image).
  4. Assign Is to If.
  5. Repeat from step 1 with the next integer pixel coordinates, until the intensities of all pixels of the fisheye image have been found.

Note that deriving the inverse fisheye transformation might be a little tricky, depending on the equations... However, that is how image resampling has to be performed.
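
In case it helps, here is a minimal sketch of this procedure in Python/NumPy. It assumes a grayscale source image, and inverse_fisheye(xf, yf) is a hypothetical function standing in for your inverse transformation (it depends on your fisheye equations, so you have to supply it yourself):

    import numpy as np

    def bilinear_sample(img, xs, ys):
        # Estimate the intensity at floating-point coordinates (xs, ys)
        # from the 4 closest integer pixels (img is a 2D grayscale array,
        # indexed as img[row, col] = img[y, x]).
        x0, y0 = int(np.floor(xs)), int(np.floor(ys))
        x1 = min(x0 + 1, img.shape[1] - 1)
        y1 = min(y0 + 1, img.shape[0] - 1)
        dx, dy = xs - x0, ys - y0
        return ((1 - dx) * (1 - dy) * img[y0, x0] +
                dx * (1 - dy) * img[y0, x1] +
                (1 - dx) * dy * img[y1, x0] +
                dx * dy * img[y1, x1])

    def warp_to_fisheye(src, inverse_fisheye, out_shape):
        # Backward warping: visit every integer pixel (xf, yf) of the
        # fisheye image and sample the source at the inverse-mapped
        # floating-point coordinates (xs, ys).
        dst = np.zeros(out_shape, dtype=src.dtype)
        for yf in range(out_shape[0]):
            for xf in range(out_shape[1]):
                xs, ys = inverse_fisheye(xf, yf)  # hypothetical inverse transform
                # Ignore pixels that map outside the source image
                if 0 <= xs <= src.shape[1] - 1 and 0 <= ys <= src.shape[0] - 1:
                    dst[yf, xf] = bilinear_sample(src, xs, ys)
        return dst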

Upvotes: 2

nimrodm

Reputation: 23839

You need to find the inverse fisheye transform first, and use "backward warping" to go from the destination image to the source image.

I'll give you a simple example. Say you want to expand the image by a non-integral factor of 1.5. So you have

x_dest = x_source * 1.5, y_dest = y_source * 1.5

Now if you iterate over the coordinates in the original image, you'll get non-integral coordinates in the destination image. E.g., (1,1) will be mapped to (1.5, 1.5). And this is your problem, and in general the problem with "forward warping" an image.

Instead, you reverse the transformation and write

x_source = x_dest / 1.5, y_source = y_dest / 1.5

Now you iterate over the destination image pixels. For example, pixel (4,4) in the destination image comes from (4/1.5, 4/1.5) ≈ (2.67, 2.67) in the source image. These are non-integral coordinates, and you use the 4 neighboring pixels in the source image to estimate the color at this coordinate (in our example the pixels at (2,2), (2,3), (3,2) and (3,3)).
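
A minimal sketch of this example in Python/NumPy (the assumptions are mine: a grayscale source image, and the function name is just for illustration):

    import numpy as np

    def upscale_backward(src, factor=1.5):
        # Backward warping for the 1.5x example: every destination pixel
        # looks up its (generally non-integral) source coordinate.
        h, w = src.shape
        dst = np.zeros((int(h * factor), int(w * factor)), dtype=src.dtype)
        for y_dest in range(dst.shape[0]):
            for x_dest in range(dst.shape[1]):
                # Inverse transform: x_source = x_dest / 1.5
                xs, ys = x_dest / factor, y_dest / factor
                x0, y0 = int(xs), int(ys)
                x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
                dx, dy = xs - x0, ys - y0
                # Bilinear blend of the 4 neighboring source pixels
                dst[y_dest, x_dest] = ((1 - dx) * (1 - dy) * src[y0, x0] +
                                       dx * (1 - dy) * src[y0, x1] +
                                       (1 - dx) * dy * src[y1, x0] +
                                       dx * dy * src[y1, x1])
        return dst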

Upvotes: 1
