Toba Tek Singh

Reputation: 21

How do I detect defects in two similar but misaligned images?

I'm trying to detect missing segments on an LCD screen. The idea is to compare the screen with a reference image and flag any segments that are missing.

These are my sample images,

Image 1:

image1

Image 2:

image2

I am ideally looking for a method that can tell which part of the image or which segment is not correct.

What I have tried so far,

  1. Absolute Difference. Since the two images above are slightly misaligned, taking the absolute difference of the two images returns this: image difference

This is clearly not helpful.

Code in EmguCV

Image<Gray, byte> im1 = new Image<Gray, byte>(@"E:\code\misalign_detect\im1.bmp");
Image<Gray, byte> im2 = new Image<Gray, byte>(@"E:\code\misalign_detect\im2.bmp");

CvInvoke.AbsDiff(im1, im2, im2);
im2.Save("im1im2 difference.bmp");
  2. Histogram Comparison and Euclidean Distance. For these two images, it seems to work: the histogram comparison metric returns 0.93, which might be conclusive enough to call them different. However,

A. It doesn't tell me where the differences are.
B. The score doesn't discriminate well when only a few segments differ between the images.

Code

private double ImageComparision(Mat testImage, Mat refImage)
{
      double retStatus = 0.0;
      double m_total = 0.0;
      try
      {
          //Create four ROI of test image
          List<DenseHistogram> m_testROIHisto = MakeFourROIofImage(testImage.ToImage<Gray, Byte>());

          //Create four ROI of reference image
          List<DenseHistogram> m_ReferenceROIHisto = MakeFourROIofImage(refImage.ToImage<Gray, Byte>());


          for (int i = 0; i < 4; i++)
          {
              DenseHistogram hist_test1 = m_testROIHisto[i];
              DenseHistogram hist_test2 = m_ReferenceROIHisto[i];
              double cBlue = CvInvoke.CompareHist(hist_test1, hist_test2, HistogramCompMethod.Correl);

              m_total += cBlue;
          }
      }
      catch (Exception ex)
      {
          MessageBox.Show("Exception in ImageComparision() " + ex.ToString());
      }

      retStatus = m_total / 4;

      return retStatus;
}

/// <summary>
/// Function used to make Four ROI of Image
/// Then compute Histogram of each ROI
/// </summary>
private List<DenseHistogram> MakeFourROIofImage(Image<Gray, Byte> img)
{
      int m_height = img.Height;
      int m_width = img.Width;
      List<DenseHistogram> m_imgList = new List<DenseHistogram>();
      for (int i = 0; i < m_width;)
      {
          for (int j = 0; j < m_height;)
          {
              img.ROI = new Rectangle(i, j, (m_width / 2), (m_height / 2));
              //cv::Mat m_roiImg = img(rectangle);
              Image<Gray, Byte> m_roiImg = img.Copy();

              // Create and initialize histogram
              DenseHistogram hist = new DenseHistogram(256, new RangeF(0.0f, 255.0f));

              // Histogram Computing
              hist.Calculate<Byte>(new Image<Gray, byte>[] { m_roiImg }, true, null);
              m_imgList.Add(hist);

              j += (m_height / 2);
          }
          i += (m_width / 2);
      }
      return m_imgList;
}

  3. Template Matching. I'm currently following a crude method of cropping every digit from the reference image and then trying to find a good match for it in the current image. The OpenCV MatchTemplate() function is translation invariant but not rotation invariant, so it often fails when the LCD screen is slightly rotated due to physical variations. (One untested workaround idea is sketched after the code below.)

Code

private Point GetBestImageMatch(Image<Gray, Byte> grayimg, Image<Gray, Byte> templateimg, double thresh = 0.8)
{
      int rcols = grayimg.Cols - templateimg.Cols + 1;
      int rrows = grayimg.Rows - templateimg.Rows + 1;
      // Image<,> takes (width, height), so columns come first
      Image<Gray, float> result = new Image<Gray, float>(rcols, rrows);

      // perform matching
      CvInvoke.MatchTemplate(grayimg, templateimg, result, Emgu.CV.CvEnum.TemplateMatchingType.CcoeffNormed);

      // check results
      double minv = 0, maxv = 0;
      Point minLoc = new Point(), maxLoc = new Point();
      CvInvoke.MinMaxLoc(result, ref minv, ref maxv, ref minLoc, ref maxLoc);

      if(maxv < thresh)
      {
          return new Point(-1, -1);
      }

      return maxLoc;
}
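
One workaround I'm considering but haven't tested is to sweep the template through a small range of rotations and keep the best normalized correlation score. This is only a rough sketch: the angle range, the step size and the helper name GetBestRotatedMatch are placeholders, not part of my current code.

private Point GetBestRotatedMatch(Image<Gray, Byte> grayimg, Image<Gray, Byte> templateimg, double thresh = 0.8, double maxAngle = 5.0, double angleStep = 0.5)
{
      double bestScore = double.MinValue;
      Point bestLoc = new Point(-1, -1);

      for (double angle = -maxAngle; angle <= maxAngle; angle += angleStep)
      {
          // rotate the template about its centre
          Mat rot = new Mat();
          PointF center = new PointF(templateimg.Width / 2f, templateimg.Height / 2f);
          CvInvoke.GetRotationMatrix2D(center, angle, 1.0, rot);

          Image<Gray, Byte> rotated = new Image<Gray, byte>(templateimg.Size);
          CvInvoke.WarpAffine(templateimg, rotated, rot, templateimg.Size);

          // match the rotated template against the test image
          Mat result = new Mat();
          CvInvoke.MatchTemplate(grayimg, rotated, result, Emgu.CV.CvEnum.TemplateMatchingType.CcoeffNormed);

          double minv = 0, maxv = 0;
          Point minLoc = new Point(), maxLoc = new Point();
          CvInvoke.MinMaxLoc(result, ref minv, ref maxv, ref minLoc, ref maxLoc);

          // keep the best score over all tested angles
          if (maxv > bestScore)
          {
              bestScore = maxv;
              bestLoc = maxLoc;
          }
      }

      return bestScore >= thresh ? bestLoc : new Point(-1, -1);
}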

To show what I'm after, I made an expected result image by moving the screen area in Paint until it overlapped the reference image. Here's the absolute difference after that:

Expected difference:

expected_difference of Image1 and Image2

Edit1:

Hans suggested I should try image registration. I think he means this, and I guess mapAffine would be somewhat relevant. However, I was unable to find a tutorial for mapShift or mapAffine. Instead I found this: Image Alignment in OpenCV.

I've rewritten the code in EmguCV below, but it throws "Exception thrown: 'Emgu.CV.Util.CvException' in Emgu.CV.World.dll" at FindTransformECC(), and I'm not sure why.

Mat im1 = new Image<Gray, byte>(@"E:\code\panel1.bmp").Mat;
Mat im2 = new Image<Gray, byte>(@"E:\code\panel1_shifted.bmp").Mat;

MotionType warpMode = MotionType.Euclidean;
Mat warp_matrix = Mat.Eye(2, 3, DepthType.Cv32F, 1);
int number_of_iterations = 5000;
double termination_eps = 1e-10;
MCvTermCriteria criteria = new MCvTermCriteria(number_of_iterations, termination_eps);
CvInvoke.FindTransformECC(im1, im2, warp_matrix, warpMode, criteria);

// Euclidean motion gives a 2x3 matrix, so warp with WarpAffine (as in the tutorial) rather than WarpPerspective
Mat im2_aligned = new Image<Gray, byte>(im1.Size).Mat;
CvInvoke.WarpAffine(im2, im2_aligned, warp_matrix, im1.Size, Inter.Linear, Warp.InverseMap);
myPicBox.Image = im2_aligned.Bitmap;

Upvotes: 2

Views: 1131

Answers (1)

Quergo

Reputation: 928

As long as your images have only a translational shift, you can perform image registration quite simply with EmguCV using PhaseCorrelate.

pathToImg1 refers to your first example image, pathToImg2 to your second.

    //load images
    var m1 = new Mat(<pathToImg1>, ImreadModes.Grayscale);
    var m2 = new Mat(<pathToImg2>, ImreadModes.Grayscale);
    
    //Convert depth to be processible by phase correlation function
    var m3 = new Mat();
    var m4 = new Mat();
    m1.ConvertTo(m3, DepthType.Cv32F);
    m2.ConvertTo(m4, DepthType.Cv32F);
    
    //Detect translation
    MCvPoint2D64f shift = CvInvoke.PhaseCorrelate(m3, m4, null, out _);
    
    //Setup affine transformation matrix
    var translateTransform = new Matrix<float>(2, 3)
    {
        [0, 0] = 1.0f,
        [1, 1] = 1.0f,
        [0, 2] = Convert.ToSingle(shift.X),
        [1, 2] = Convert.ToSingle(shift.Y)
    };
    
    //Translate image1
    CvInvoke.WarpAffine(m1, m1, translateTransform, m1.Size, Inter.Area);
    
    //Get diff
    CvInvoke.AbsDiff(m1, m2, m2);
    
    
    m2.Save(@"<outPath>\result.png");

For your images this gave me the following result:

result: absolute difference after alignment

The left and bottom border artifacts come from the translational shift. You can crop them off if you need to, as in the sketch below.
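
If you also need to localize the missing segments, a rough sketch continuing from the snippet above could crop the shifted border and threshold what is left of the difference. The margin, the threshold value of 50 and the contour handling are assumptions I have not tuned for your images (it also assumes Emgu.CV.Util and System.Drawing are referenced):

    //Continues from the snippet above
    //Crop away the border artifacts introduced by the shift
    int margin = (int)Math.Ceiling(Math.Max(Math.Abs(shift.X), Math.Abs(shift.Y)));
    var roi = new Rectangle(margin, margin, m2.Width - 2 * margin, m2.Height - 2 * margin);
    var cropped = new Mat(m2, roi);
    
    //Threshold the difference image so only real deviations remain
    var mask = new Mat();
    CvInvoke.Threshold(cropped, mask, 50, 255, ThresholdType.Binary);
    
    //Each remaining blob marks a region where the two screens differ
    var contours = new VectorOfVectorOfPoint();
    CvInvoke.FindContours(mask, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);
    for (int i = 0; i < contours.Size; i++)
    {
        Rectangle box = CvInvoke.BoundingRectangle(contours[i]);
        Console.WriteLine(box);
    }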

Upvotes: 1
