Reputation: 111
I am working on a video processing project to detect foreground objects. Below is the part of my code that separates the foreground from the background.
#include "opencv2/core/core.hpp"
#include "opencv2/video/background_segm.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdio.h>
using namespace std;
using namespace cv;
//this is a sample for foreground detection functions
int main(int argc, const char** argv)
{
    VideoCapture cap;
    bool update_bg_model = true;

    cap.open(0);
    if( !cap.isOpened() )
    {
        printf("can not open camera or video file\n");
        return -1;
    }

    namedWindow("image", CV_WINDOW_NORMAL);
    namedWindow("foreground mask", CV_WINDOW_NORMAL);
    namedWindow("foreground image", CV_WINDOW_NORMAL);
    namedWindow("mean background image", CV_WINDOW_NORMAL);

    BackgroundSubtractorMOG2 bg_model;
    Mat img, fgmask, fgimg;

    for(;;)
    {
        cap >> img;
        if( img.empty() )
            break;

        if( fgimg.empty() )
            fgimg.create(img.size(), img.type());

        //update the model
        bg_model(img, fgmask, update_bg_model ? -1 : 0);

        fgimg = Scalar::all(0);
        img.copyTo(fgimg, fgmask);

        Mat bgimg;
        bg_model.getBackgroundImage(bgimg);

        imshow("image", img);
        imshow("foreground mask", fgmask);
        imshow("foreground image", fgimg);
        if( !bgimg.empty() )
            imshow("mean background image", bgimg);

        char k = (char)waitKey(30);
        if( k == 27 ) break;
        if( k == ' ' )
        {
            update_bg_model = !update_bg_model;
            if(update_bg_model)
                printf("Background update is on\n");
            else
                printf("Background update is off\n");
        }
    }

    return 0;
}
In the foreground mask window I am getting a lot of noise along with the actual foreground object. Also, the full object is not recognized as foreground. I need to bound the foreground objects with rectangles as well. Will boundingRect() do the job if I draw contours around the blobs in the foreground mask? Also, what are the recommended parameters to pass to findContours() and to boundingRect()? Thanks in advance.
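Something along these lines is what I have in mind for the bounding rectangles, though I am not sure about the retrieval/approximation flags or the area threshold (this would also need opencv2/imgproc/imgproc.hpp):

// find contours in the binary foreground mask and draw a bounding
// rectangle around each reasonably large blob
vector<vector<Point> > contours;
Mat mask = fgmask.clone();               // findContours modifies its input
findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
for( size_t i = 0; i < contours.size(); i++ )
{
    if( contourArea(contours[i]) < 500 ) // area threshold is a guess, needs tuning
        continue;
    Rect box = boundingRect(contours[i]);
    rectangle(img, box, Scalar(0, 255, 0), 2);
}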
Upvotes: 0
Views: 1459
Reputation: 1254
Too late to answer, but I hope this helps someone else.
Separating foreground from background in videos (without any constraint on the background) in a pixel-perfect manner is a very difficult problem. A lot of research has gone into this field and there is still scope for more. So a simple mixture of Gaussians (as used by BackgroundSubtractorMOG2) may not give you very accurate results. The noise is almost inevitable, since the MOG's decision is based on colour cues and it is possible that some background pixels happen to fit the Gaussian models it builds.
The pixels you get as foreground effectively represent change. Hence, if you tinker with the learning rate of the background model, you can closely track the pixels that are moving. If you can work under the assumption that your background is fairly static, the moving pixels will represent your foreground and can help you solve the problem to some extent.
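For example, in your loop you could pass an explicit learning rate instead of -1 (the value below is only an illustration, tune it for your scene):

// third argument is the learning rate: -1 lets OpenCV pick it automatically,
// 0 freezes the model, and a small positive value (0..1) makes the background
// adapt slowly, so moving objects stay in the foreground mask longer
double learningRate = 0.005;   // illustrative value only
bg_model(img, fgmask, update_bg_model ? learningRate : 0);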
I also suggest trying the BackgroundSubtractorGMG class in OpenCV. It learns a background model from the first few frames (the number can be set). If possible, let these first few frames contain no foreground. You may achieve decent results.
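A rough sketch of how it could be plugged into a capture loop like yours (I am writing the field names numInitializationFrames and decisionThreshold from memory of the 2.4.x API, so please double-check them against your OpenCV version):

// GMG background subtractor, also declared in opencv2/video/background_segm.hpp
BackgroundSubtractorGMG gmg;
gmg.numInitializationFrames = 40;   // frames used to build the initial model
gmg.decisionThreshold = 0.8;        // raise to suppress more noise

Mat frame, mask;
for(;;)
{
    cap >> frame;
    if( frame.empty() )
        break;
    gmg(frame, mask);               // same operator() style as MOG2
    imshow("GMG foreground mask", mask);
    if( (char)waitKey(30) == 27 )
        break;
}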
Upvotes: 2