Mridula Madhusudan

Reputation: 81

How to compare a video with a reference video using OpenCV and Python?

I need to compare 2 videos and check whether they are the same. The purpose of this check is actually to find whether the video contains any artifacts.

I have a reference video against which the captured video has to be compared. The videos will be captured from 2 different set-top boxes at the same instant in time. One will be running an artifact-free video, and the other set-top box will be put under test in comparison to the stable one. External conditions need not be taken into account.

One way to do this would be to break both videos into frames and then compare each frame. I do not want to do that, since it would be a very lengthy process for high-resolution videos at 60 fps.

How can I do this using OpenCV in Python?

Upvotes: 5

Views: 11201

Answers (5)

Messypuddle

Reputation: 429

The question is still a bit vague, but there are a number of ways to try to compare videos for artifacts. It is also significantly easier if they have the same resolution, fps, etc.

  1. If you subtract frame[N] of one video from frame[N] of the other (you may need to rescale to account for negative values), the resulting frame should be near zero in all pixels. Proper colormapping will allow you to play the resulting video, and 'artifacts' will be noticeable, with the color indicating the magnitude of the variation (a Python sketch of this is below).

  2. If your computer can handle it, plot pixel values from one video on one axis (x-axis) and the matching pixel values from the other video on the other axis (y-axis). The closer the plot falls to the x=y line, the more similar the videos. In this case artifacts will typically appear as deviations from the x=y line of greater magnitude than the 'noise' one might expect from videos that are similar but not exactly the same. Then you just need to find the indexes of the deviations above the threshold you set to define 'artifacts'.

  3. Similar to method 2 above: pixel values can be compared via correlation. If the correlation is plotted in higher dimensions, this method will also give the location of artifacts.

There are a number of methods to transform your videos into multidimensional arrays depending on what comparisons you would like to perform.
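
A minimal Python sketch of the subtraction approach in point 1, assuming both videos have the same resolution and frame rate; the file names ref.mp4 and test.mp4 are placeholders:

import cv2

ref = cv2.VideoCapture("ref.mp4")    # artifact-free reference video
test = cv2.VideoCapture("test.mp4")  # video under test

while True:
    ok_ref, f_ref = ref.read()
    ok_test, f_test = test.read()
    if not (ok_ref and ok_test):
        break

    # absdiff sidesteps the negative-value problem of a plain subtraction
    diff = cv2.absdiff(f_ref, f_test)

    # collapse to one channel and colormap it so the magnitude of the
    # deviation is visible when the difference video is played back
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    heat = cv2.applyColorMap(gray, cv2.COLORMAP_JET)

    cv2.imshow("difference", heat)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

ref.release()
test.release()
cv2.destroyAllWindows()

Using cv2.absdiff avoids having to rescale for negative values, and the JET colormap makes larger deviations stand out while the difference video plays.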

Upvotes: 1

LSerni

Reputation: 57408

Open both videos.

  • get three frames A, B and C at "reasonable" intervals (say 5 seconds) in the middle of the 1st video
  • for every N-th frame of the second video at position pos, with N depending on FPS and video "speed" (car chase video: N=1; slow pans over green landscapes: N maybe 5 or even 10):
    • compare the PSNR of this frame with frames A, B and C
    • if the PSNR is greater than BestPSNR[A, B or C].PSNR (a higher PSNR means a closer match), then
      • BestPSNR[A, B or C] = (PSNR, pos)

If the best-match positions of at least two of the three frames are separated by about the same 5 seconds, there is a good chance that the videos are duplicates, and you also know how to sync them: align the second video so that FirstVideo.pos[A, B or C] = BestPSNR[A, B or C].pos.

At this point you can start comparing synced frames one by one, looking for artifacts.
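
A rough Python sketch of this search, assuming both videos share a resolution and that the three reference positions fall inside the clip; the psnr helper and the file names are placeholders of mine, not part of the answer:

import cv2
import numpy as np

def psnr(a, b):
    # PSNR between two same-sized 8-bit frames; higher means more similar
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def grab(cap, index):
    cap.set(cv2.CAP_PROP_POS_FRAMES, index)
    ok, frame = cap.read()
    return frame if ok else None

ref = cv2.VideoCapture("ref.mp4")
test = cv2.VideoCapture("test.mp4")
fps = ref.get(cv2.CAP_PROP_FPS) or 25
mid = int(ref.get(cv2.CAP_PROP_FRAME_COUNT)) // 2

# frames A, B and C, about 5 seconds apart, from the middle of the 1st video
# (assumes the video is long enough for all three positions to exist)
ref_pos = [mid, mid + int(5 * fps), mid + int(10 * fps)]
ref_frames = [grab(ref, p) for p in ref_pos]

N = 5                            # scan every N-th frame of the second video
best = [(-1.0, None)] * 3        # (best PSNR, pos) for A, B and C

pos = 0
while True:
    frame = grab(test, pos)
    if frame is None:
        break
    for i, rf in enumerate(ref_frames):
        score = psnr(rf, frame)
        if score > best[i][0]:   # higher PSNR = closer match
            best[i] = (score, pos)
    pos += N

# if the three best-match positions keep roughly the same 5-second spacing,
# ref_pos[i] - best[i][1] gives the offset needed to sync the two videos
print(best)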

Upvotes: 1

Rajeev Ranjan

Reputation: 11

The PSNR example runs both videos at the same time and compares each frame of both videos at the same point in time. But what if the 1st minute of the first video's content matches the 2nd or 3rd minute of the second video's content? PSNR will not fit such a case. You can instead use a video frame-matching algorithm based on dynamic programming: http://electronicimaging.spiedigitallibrary.org/article.aspx?articleid=1100207
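
The paper itself is behind a paywall, but the core idea — letting a dynamic-programming alignment decide which frame of one video matches which frame of the other, rather than comparing frames at the same timestamp — can be sketched roughly as below. The thumbnail "signature" and the DTW-style cost are my own simplifications, not the algorithm from the paper:

import cv2
import numpy as np

def signatures(path, size=(16, 16)):
    # reduce every frame to a tiny grayscale thumbnail so the DP table stays small
    cap = cv2.VideoCapture(path)
    sigs = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sigs.append(cv2.resize(gray, size).astype(np.float32).ravel())
    cap.release()
    return sigs

def align_cost(a, b):
    # DTW-style dynamic programming: cheapest monotonic matching of the two
    # frame sequences, allowing frames of either video to be skipped.
    # O(len(a) * len(b)) in time and memory, so subsample long videos first.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = float(np.mean(np.abs(a[i - 1] - b[j - 1])))
            cost[i, j] = d + min(cost[i - 1, j - 1],   # match the two frames
                                 cost[i - 1, j],       # skip a frame of a
                                 cost[i, j - 1])       # skip a frame of b
    return cost[n, m]

print("alignment cost:", align_cost(signatures("ref.mp4"), signatures("test.mp4")))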

Upvotes: 1

CAta.RAy

Reputation: 514

You can check the C++ example "Video Input with OpenCV and similarity measurement" in the OpenCV tutorials.

This provides a reference for what you are looking for. I am not very familiar with Python, but since OpenCV keeps the functionality the same, I think you can extrapolate from the C++ example.

The most common algorithm used for this is PSNR (peak signal-to-noise ratio).

#include <opencv2/opencv.hpp>
using namespace cv;

double getPSNR(const Mat& I1, const Mat& I2)
{
 Mat s1;
 absdiff(I1, I2, s1);       // |I1 - I2|
 s1.convertTo(s1, CV_32F);  // cannot make a square on 8 bits
 s1 = s1.mul(s1);           // |I1 - I2|^2

 Scalar s = sum(s1);         // sum elements per channel

 double sse = s.val[0] + s.val[1] + s.val[2]; // sum channels

 if( sse <= 1e-10) // for small values return zero
     return 0;
 else
 {
     double mse = sse / (double)(I1.channels() * I1.total());
     double psnr = 10.0*log10((255*255)/mse);
     return psnr;
 }
}

But if you want a structural-similarity measure (SSIM), you can use the OpenCV implementation below.

Scalar getMSSIM(const Mat& i1, const Mat& i2)
{
 const double C1 = 6.5025, C2 = 58.5225;
 /***************************** INITS **********************************/
 int d     = CV_32F;

 Mat I1, I2;
 i1.convertTo(I1, d);           // cannot calculate on one byte large values
 i2.convertTo(I2, d);

 Mat I2_2   = I2.mul(I2);        // I2^2
 Mat I1_2   = I1.mul(I1);        // I1^2
 Mat I1_I2  = I1.mul(I2);        // I1 * I2

 /***********************PRELIMINARY COMPUTING ******************************/

 Mat mu1, mu2;                  // local means, computed by Gaussian blurring
 GaussianBlur(I1, mu1, Size(11, 11), 1.5);
 GaussianBlur(I2, mu2, Size(11, 11), 1.5);

 Mat mu1_2   =   mu1.mul(mu1);
 Mat mu2_2   =   mu2.mul(mu2);
 Mat mu1_mu2 =   mu1.mul(mu2);

 Mat sigma1_2, sigma2_2, sigma12;

 GaussianBlur(I1_2, sigma1_2, Size(11, 11), 1.5);
 sigma1_2 -= mu1_2;

 GaussianBlur(I2_2, sigma2_2, Size(11, 11), 1.5);
 sigma2_2 -= mu2_2;

 GaussianBlur(I1_I2, sigma12, Size(11, 11), 1.5);
 sigma12 -= mu1_mu2;

 ///////////////////////////////// FORMULA ////////////////////////////////
 Mat t1, t2, t3;

 t1 = 2 * mu1_mu2 + C1;
 t2 = 2 * sigma12 + C2;
 t3 = t1.mul(t2);              // t3 = ((2*mu1_mu2 + C1).*(2*sigma12 + C2))

 t1 = mu1_2 + mu2_2 + C1;
 t2 = sigma1_2 + sigma2_2 + C2;
 t1 = t1.mul(t2);               // t1 =((mu1_2 + mu2_2 + C1).*(sigma1_2 + sigma2_2 + C2))

 Mat ssim_map;
 divide(t3, t1, ssim_map);      // ssim_map =  t3./t1;

 Scalar mssim = mean( ssim_map ); // mssim = average of ssim map
 return mssim;
}

Note that since you are comparing frame by frame (two images at a time), you have to loop through both videos to get the corresponding pairs.
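
Since the question asks for Python, a minimal frame-by-frame loop might look like this; get_psnr mirrors the C++ getPSNR above, the file names are placeholders, and the 30 dB threshold is only an example:

import cv2
import numpy as np

def get_psnr(i1, i2):
    # same computation as the C++ getPSNR above
    diff = cv2.absdiff(i1, i2).astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse <= 1e-10:                 # (near-)identical frames
        return 0.0
    return 10.0 * np.log10(255.0 ** 2 / mse)

ref = cv2.VideoCapture("ref.mp4")
test = cv2.VideoCapture("test.mp4")

frame_no = 0
while True:
    ok_ref, f_ref = ref.read()
    ok_test, f_test = test.read()
    if not (ok_ref and ok_test):
        break
    value = get_psnr(f_ref, f_test)
    if value and value < 30.0:       # 30 dB is only an example threshold
        print("frame %d: PSNR %.1f dB - possible artifact" % (frame_no, value))
    frame_no += 1

ref.release()
test.release()

If I remember correctly, recent OpenCV builds also expose cv2.PSNR(i1, i2) directly, and scikit-image provides structural_similarity if you would rather not port getMSSIM by hand.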

Upvotes: 3

Mick

Reputation: 25481

If you mean that they are exactly the same (i.e. same format, same file type, etc.), then the easiest way is a simple file comparison: just compare each file byte by byte.

It is also the only sure test: for example, they may be nearly identical but one has some corrupted bytes halfway through.

This type of byte by byte comparison will be much simpler than trying to decode and interpret the many, many different video formats that exist.
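
In Python this byte-by-byte check is a one-liner with the standard library (the file names are placeholders):

import filecmp

# shallow=False forces an actual byte-by-byte comparison of the file contents,
# not just a check of size and modification time
same = filecmp.cmp("ref.mp4", "test.mp4", shallow=False)
print("files are identical" if same else "files differ")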

Upvotes: 1
