Pavel Podlipensky

Reputation: 8269

Computer Vision: Remove Edge Effect

I combine several photos into one (somewhat similar to a panorama) and I see differences in intensity, especially near the edges of the photos. What is the best approach to remove these effects? I guess I should normalize the intensity, but perhaps there are other techniques as well? And if not, how exactly would you normalize the intensity of two images?

I use OpenCV, so I'd appreciate any code sample in Python or C++. Thanks in advance.

UPDATE

Will the Stitcher class in OpenCV solve my problem? If so, how can I avoid calling estimateTransform on the stitcher before combining images? I want to avoid this call because my camera doesn't move, so I know the exact location of the stitch. Any help appreciated. Thanks.

Upvotes: 1

Views: 2898

Answers (2)

paghdv

Reputation: 564

I think the answer depends on how much overlap your images have. You could try blending, but I don't know whether your images show different content in the overlapping regions. I would try combining histogram matching (http://en.wikipedia.org/wiki/Histogram_matching) with alpha blending.
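For grayscale frames, that combination can be sketched with plain NumPy. The function names and the fixed-width vertical seam are my own assumptions; the histogram matching follows the CDF-remapping approach described in the linked article:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source's gray levels so its histogram matches reference's."""
    src_values, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Normalized cumulative distribution functions of both images
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source level, pick the reference level whose CDF value matches
    mapped = np.interp(src_cdf, ref_cdf, ref_values)
    return mapped[src_idx].reshape(source.shape)

def blend_overlap(left, right, overlap):
    """Cross-fade two same-height images over an `overlap`-pixel-wide seam."""
    h, lw = left.shape
    out = np.zeros((h, lw + right.shape[1] - overlap), dtype=np.float64)
    out[:, :lw - overlap] = left[:, :-overlap]
    out[:, lw:] = right[:, overlap:]
    alpha = np.linspace(1.0, 0.0, overlap)  # weight of the left image
    out[:, lw - overlap:lw] = (alpha * left[:, -overlap:]
                               + (1 - alpha) * right[:, :overlap])
    return out
```

You would match the second frame's histogram to the first, then blend, e.g. `blend_overlap(first, match_histogram(second, first), overlap=64)`. This only helps if the overlapping strips show roughly the same content.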

Upvotes: 0

Cloud

Reputation: 19331

What you are requesting is essentially a two-part modification to a stitching algorithm (i.e., an algorithm that pieces together multiple images into a single one).

The first part is de-vignetting, which corrects the falloff in brightness toward the corners of each image. This is common when taking pictures with wide-angle lenses. The solution isn't exactly trivial, and I don't have a working C++ source example; my guess is you'll have to learn the math behind it and implement it yourself. I can, however, point you to a shell script that passes the image through ImageMagick to solve this problem, and the algorithm itself is fully worked out there in layman's terms.
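For illustration only, here is a toy correction assuming a simple quadratic radial falloff. The `devignette` name and the single `strength` parameter are my simplification; the linked write-up derives a proper lens-specific model, and `strength` would have to be calibrated for your lens:

```python
import numpy as np

def devignette(img, strength):
    """Compensate a radial brightness falloff by applying the inverse gain
    gain(r) = 1 + strength * (r / r_max)**2, where r is the distance of a
    pixel from the image center and r_max is the center-to-corner distance."""
    h, w = img.shape[:2]
    y, x = np.indices((h, w))
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    r2 = r2 / r2.max()  # normalize so the corners sit at r^2 = 1
    gain = 1.0 + strength * r2
    # Brighten pixels in proportion to their distance from the center
    return np.clip(img.astype(np.float64) * gain, 0, 255)
```

With `strength = 0.5`, a corner pixel is brightened by 50% while the center is left untouched.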

The second half involves stitching the now de-vignetted images together. You seem to already have that part down. However, even after the brightness artifacts at the edges are corrected, the average intensity of each image will differ. That is easily fixed with histogram equalization, for which I do have a C++ example.

So, in short:

  1. Apply de-vignetting algorithm to each frame
  2. Apply histogram equalization algorithm to each frame
  3. Stitch all the original frames into a final image
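A rough Python/NumPy sketch of steps 2 and 3, assuming 8-bit grayscale frames and a known horizontal offset between them. The function names are mine: `equalize_hist` mirrors what OpenCV's `cv2.equalizeHist` does, and the fixed-offset paste stands in for a full stitcher, which suits a stationary camera like the asker's:

```python
import numpy as np

def equalize_hist(gray):
    """Histogram equalization for an 8-bit grayscale image
    (same idea as OpenCV's cv2.equalizeHist)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0][0]
    # Spread the cumulative distribution over the full [0, 255] range
    lut = np.clip(np.round((cdf - cdf_min) / (gray.size - cdf_min) * 255.0),
                  0, 255).astype(np.uint8)
    return lut[gray]

def stitch_fixed(left, right, x_offset):
    """Paste `right` over `left` at a known horizontal offset; with a
    stationary camera no transform estimation is needed."""
    out = np.zeros((left.shape[0], x_offset + right.shape[1]), dtype=left.dtype)
    out[:, :left.shape[1]] = left
    out[:, x_offset:] = right  # right frame wins in the overlap region
    return out

# Step 1 (de-vignetting) would run first on each frame, then:
# frames = [equalize_hist(f) for f in frames]
# pano = stitch_fixed(frames[0], frames[1], x_offset=known_offset)
```

In practice you would feather or blend the overlap rather than letting one frame overwrite the other, but the hard-edged paste keeps the sketch short.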

After going through the work involved, you'll see why most software that does this is commercial and its source isn't free.

References

  1. "De-vignetting", Accessed 2014-02-11, http://www.physics.mcmaster.ca/~syam/Photo/

  2. "Image Registration with Global and Local Luminance Alignment", Accessed 2014-02-11, http://www.cse.cuhk.edu.hk/leojia/all_project_webpages/luminance_alignment/luminance_alignment.html

  3. "Histogram equalization using C++: Image Processing", Accessed 2014-02-11, http://www.programming-techniques.com/2013/01/histogram-equalization-using-c-image.html

Upvotes: 4
