Marcos G

Reputation: 33

Find unchanged areas on multiple images

I'm trying to detect the areas that don't change between 40-50 images (basically the unchanged pixels). For simplicity I'll provide an example with only 3 images:

This could be the output of the program, a mask showing what was untouched in those 3 images.

I've tried with compare from ImageMagick:

compare *.png -fuzz 20 -compose src mask.png

but it doesn't seem to support a list of files: it only produces the differences between the first two images (mask.png).

Iterating through all the images and joining the masks pairwise is not an option, because it would generate lots of unwanted intermediate files (and would probably be slow).

I'm aware that this is the same question as "how to get differences between images", but the solutions given for those questions don't apply when there are more than 2 images.

Is there any simple way to do it?

Upvotes: 2

Views: 189

Answers (2)

Mark Setchell

Reputation: 207798

Use ImageMagick's -evaluate-sequence operator to find the maximum (i.e. brightest) and minimum (i.e. darkest) value of each pixel position across all images. That is, for every position such as (0, 0), collect that pixel from each image and take the maximum and minimum of the collected values.

Then, calculate the difference between the max and min. If there is no difference, it means that that pixel is very likely to have been the same for every single image in the set:

magick *.png -evaluate-sequence max    \
    \( *.png -evaluate-sequence min \) \
    -compose difference -composite -threshold 0 result.png
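The max/min logic behind that command can be sketched in Python with NumPy (a hedged illustration of the idea on tiny synthetic arrays, not of what ImageMagick does internally):

```python
import numpy as np

# Three small synthetic grayscale "images" (values 0-255).
imgs = np.stack([
    np.array([[10, 200], [30, 30]], dtype=np.uint8),
    np.array([[10, 100], [30, 30]], dtype=np.uint8),
    np.array([[10,  50], [30, 99]], dtype=np.uint8),
])

# Per-pixel maximum and minimum across the whole sequence.
per_pixel_max = imgs.max(axis=0)
per_pixel_min = imgs.min(axis=0)

# A pixel is unchanged in every image exactly when max == min,
# i.e. when the difference composite above would be zero.
unchanged_mask = per_pixel_max == per_pixel_min
print(unchanged_mask)
# [[ True False]
#  [ True False]]
```

Pixel (0, 0) holds 10 in all three images, so it is flagged unchanged; pixel (0, 1) varies between 50 and 200, so it is not.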

If you have plenty of RAM, you can avoid reading them twice by making a copy of your sequence in an MPR:

magick *.png -alpha off -write MPR:seq     \
               -evaluate-sequence max      \
    \( MPR:seq -evaluate-sequence min \)   \
    -compose difference -composite -threshold 0 result.png

On reflection, this is actually rather similar to Fred's answer, because it stands to reason that the variance will be zero if the maximum and minimum are the same - though statistics is not my strong point...

Upvotes: 3

fmw42

Reputation: 53164

You can do that by computing the standard deviation across all the images and then taking the darkest regions (a low standard deviation means the pixel values are similar). Threshold at some level and negate so that those regions become white. This can be done with one of my bash unix shell scripts for ImageMagick, called stdimage, which computes the standard deviation across all the input images; then threshold and negate.


stdimage image1.png image2.png image3.png miff:- | convert - -threshold 0 -negate result.png


Without my script, one could compute the standard deviation across all images using -fx.
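The standard-deviation idea can be sketched in Python with NumPy (a minimal illustration on synthetic arrays, standing in for the stdimage script or an -fx expression):

```python
import numpy as np

# Three small synthetic grayscale "images".
imgs = np.stack([
    np.array([[10, 200], [30, 30]], dtype=np.float64),
    np.array([[10, 100], [30, 30]], dtype=np.float64),
    np.array([[10,  50], [30, 99]], dtype=np.float64),
])

# Per-pixel standard deviation across the sequence; a std of zero
# means the pixel had the same value in every image.
std = imgs.std(axis=0)

# Thresholding at zero and "negating" (selecting the dark regions
# of the std image) leaves True where nothing changed.
unchanged_mask = std == 0
print(unchanged_mask)
# [[ True False]
#  [ True False]]
```

In practice you would threshold at some small tolerance instead of exactly zero, just as the -fuzz option does, to absorb compression noise.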

If you have one image that is just the background, then you could subtract it from every image and threshold. Then multiply all the thresholded images together using -evaluate-sequence multiply. That gives the same result after negating.
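That background-subtraction variant can be sketched the same way (a hedged NumPy illustration; here the per-image masks are already negated, so multiplying them directly keeps the unchanged pixels):

```python
import numpy as np

# A known background image and two synthetic frames.
background = np.array([[10, 10], [30, 30]], dtype=np.int32)
imgs = [
    np.array([[10, 200], [30, 30]], dtype=np.int32),
    np.array([[10, 100], [30, 30]], dtype=np.int32),
]

# Per image: 1 where the pixel matches the background, 0 where it differs
# (the inverse of "subtract and threshold").
masks = [(np.abs(img - background) == 0).astype(np.uint8) for img in imgs]

# Multiplying the per-image masks keeps only the pixels that matched
# the background in every single image.
combined = masks[0]
for m in masks[1:]:
    combined = combined * m
print(combined)
# [[1 0]
#  [1 1]]
```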

Upvotes: 3
