Reputation: 1675
I have an RGBA image (below) and I want to calculate different quality metrics comparing the original image to a distorted version. While doing this, I would like to ignore the background and focus only on the object. First, I extract the alpha channel from the original to create an alpha mask, and I also create a version of the image without the alpha channel.
convert man1_orig.png -alpha extract man1_mask.png
convert man1_orig.png -alpha off man1.png
Then I create a distorted version, pass the alpha mask to the -read-mask flag of compare, and compute PSNR and SSIM:
magick man1.png -quality 5% distorted.jpg
compare -metric PSNR -read-mask man1_mask.png man1.png distorted.jpg diff_mask.png
// 23.9876
compare -metric SSIM -read-mask man1_mask.png man1.png distorted.jpg diff_mask.png
// 0.192216
However, when I repeat the same comparisons on the full image (no mask), PSNR gives a different result, as expected, but SSIM yields exactly the same value:
compare -metric PSNR man1.png distorted.jpg diff_full.png
// 27.7426
compare -metric SSIM man1.png distorted.jpg diff_full.png
// 0.192216
For the full image, I think the PSNR is higher because the background pixels in the undistorted and distorted images are very similar and contribute positively to the result. For SSIM, I suspect that -read-mask doesn't have an effect. Is this feature not implemented for SSIM, or does it perhaps not make sense to measure SSIM on a masked region of an image?
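To make the PSNR reasoning concrete, here is a rough numpy sketch of what I believe the masked metric is doing: the squared error is averaged only over the masked pixels instead of over the whole frame, so the near-identical background no longer pulls the MSE down. The file names are the ones from the commands above, and I am assuming the mask is white where the object is (which is what -alpha extract produces); this is not ImageMagick's own code, just an illustration.

import numpy as np
import imageio.v3 as iio

orig = iio.imread("man1.png")[..., :3].astype(np.float64)
dist = iio.imread("distorted.jpg")[..., :3].astype(np.float64)

mask = iio.imread("man1_mask.png")
if mask.ndim == 3:                        # in case the mask was saved with 3 channels
    mask = mask[..., 0]
mask = mask > 0                           # True where the object is (white in the mask)

def psnr(a, b, sel=None, peak=255.0):
    # Squared error per pixel, averaged over all pixels or only the selected ones.
    err = (a - b) ** 2
    mse = err[sel].mean() if sel is not None else err.mean()
    return 10 * np.log10(peak ** 2 / mse)

print("full-image PSNR:", psnr(orig, dist))        # background included
print("masked PSNR:    ", psnr(orig, dist, mask))  # object pixels only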
Upvotes: 0
Views: 287