Jasar Orion

Reputation: 686

Background subtractor python opencv (remove granulation)

Hello, I'm using MOG2 to build a background subtractor from a base frame against the following frames, but it is showing too much noise.

[screenshot: foreground mask speckled with noise]

I'd like to know if there is another background subtractor that can eliminate these points. I also have another problem: when a car passes with its headlights on, the headlights show up as white in my image. I need to ignore the reflection of the headlights on the ground.

Does someone know how to do that?

My code for the BGS:

backSub = cv2.createBackgroundSubtractorMOG2(history=1, varThreshold=150, detectShadows=True)
fgMask = backSub.apply(frame1)
fgMask2 = backSub.apply(actualframe)
maskedFrame = fgMask2 - fgMask
cv2.imshow("maskedFrame1 "+str(id), maskedFrame)

Upvotes: 3

Views: 1880

Answers (2)

Usama Hasan

Reputation: 11

You can use SuBSENSE: A Universal Change Detection Method With Local Adaptive Sensitivity https://ieeexplore.ieee.org/document/6975239.

BackgroundSubtractionSuBSENSE bgs(/*...*/);
bgs.initialize(/*...*/);
for(/*all frames in the video*/) {
    //...
    bgs(input,output);
    //...
}

You can find the complete implementation at https://bitbucket.org/pierre_luc_st_charles/subsense/src/master/
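Since your pipeline is in Python, SuBSENSE is also exposed through the bgslibrary Python wrapper (pybgs). Below is a minimal sketch, assuming pybgs is installed and provides the algorithm as pybgs.SuBSENSE() with an apply() method (check the bgslibrary documentation for your version; the 'traffic.mp4' filename is just a placeholder):

import cv2
import pybgs  # Python wrapper from bgslibrary (assumed installed via pip or built from source)

# SuBSENSE adapts its sensitivity locally, which helps with speckle noise
algorithm = pybgs.SuBSENSE()

cap = cv2.VideoCapture('traffic.mp4')  # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # apply() updates the model and returns the binary foreground mask
    fg_mask = algorithm.apply(frame)
    cv2.imshow('SuBSENSE foreground', fg_mask)

    if cv2.waitKey(1) == 27:  # ESC to quit
        break

cap.release()
cv2.destroyAllWindows()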

I don't know the scale of your work or your requirements, but Murari Mandal has compiled a very informative GitHub repository listing resources related to background subtraction, which may help with the problems mentioned above.

https://github.com/murari023/awesome-background-subtraction

Upvotes: 1

karlphillip

Reputation: 93410

You can try performing a Gaussian blur before sending the frame to backSub.apply(), or experiment with the parameters of cv2.createBackgroundSubtractorMOG2(); if you need a better explanation of what they do, try this page.

This is the result from a 7x7 Gaussian blur using this video.

Code:

import cv2
import numpy as np
import sys

# read input video
cap = cv2.VideoCapture('traffic.mp4')
if not cap.isOpened():
    print("!!! Failed to open video")
    sys.exit(-1)

# retrieve input video frame size
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
print('* Input Video settings:', frame_width, 'x', frame_height, '@', fps)

# adjust output video size
frame_height = int(frame_height / 2)
print('* Output Video settings:', frame_width, 'x', frame_height, '@', fps)

# create output video
video_out = cv2.VideoWriter('traffic_out.mp4', cv2.VideoWriter_fourcc(*'MP4V'), fps, (frame_width, frame_height))
#video_out = cv2.VideoWriter('traffic_out.avi', cv2.VideoWriter_fourcc('M','J','P','G'), fps, (frame_width, frame_height), True)

# create MOG
backSub = cv2.createBackgroundSubtractorMOG2(history=5, varThreshold=60, detectShadows=True)

while (True):
    # retrieve frame from the video
    ret, frame = cap.read() # 3-channels
    if (frame is None):
        break

    # resize to 50% of its original size
    frame = cv2.resize(frame, None, fx=0.5, fy=0.5)

    # gaussian blur helps to remove noise
    blur = cv2.GaussianBlur(frame, (7,7), 0)
    #cv2.imshow('frame_blur', blur)

    # subtract background
    fgmask = backSub.apply(blur) # single channel
    #cv2.imshow('fgmask', fgmask)

    # concatenate both frames horizontally and write it as output
    fgmask_bgr = cv2.cvtColor(fgmask, cv2.COLOR_GRAY2BGR) # convert single channel image to 3-channels
    out_frame = cv2.hconcat([blur, fgmask_bgr]) # blurred input on the left, mask on the right
    #print('output=', out_frame.shape) # shape=(360, 1280, 3)

    cv2.imshow('output', out_frame)
    video_out.write(out_frame)

    # quick pause to display the windows
    if (cv2.waitKey(1) == 27):
        break
    
# release resources
cap.release()
video_out.release()
cv2.destroyAllWindows()

Upvotes: 3
