Reputation: 99
I have to deploy a machine learning model that processes video coming from the user's camera and acts on the model's predictions. I want to give the user the ability to control how long the model takes the video feed from the camera, for example through some kind of start/stop button.
Right now I can run predictions on the video feed, but it is continuous, i.e. for every frame, and I return each processed frame to the front end with StreamingHttpResponse. The problem is that I don't know how to include any controls (stop, continue prediction) with StreamingHttpResponse.
I am open to suggestions if there is another way to achieve this besides StreamingHttpResponse, or, if it is possible with StreamingHttpResponse, please point me in the right direction.
View functions that provide the streaming capability:
def gen(camera):
    while True:
        frame = camera.get_frame()
        m_image, lab = predict_video(frame, "result")
        print(lab)
        # m_image = cv2.cvtColor(m_image, cv2.COLOR_RGB2BGR)
        ret, m_image = cv2.imencode('.jpg', m_image)
        m_image = m_image.tobytes()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + m_image + b'\r\n\r\n')


def livefeed(request):
    try:
        return StreamingHttpResponse(gen(VideoCamera()),
                                     content_type="multipart/x-mixed-replace;boundary=frame")
    except Exception as e:  # TODO: this catch-all is bad, replace it with proper handling
        print(e)
predict_video is another function that I wrote inside views.py; it returns the modified image (the frame with a bounding box drawn on it) and the predicted label.
The camera object passed to gen is an instance of the VideoCamera class, which I defined in another .py file:
import threading

import cv2


class VideoCamera(object):
    def __init__(self):
        self.video = cv2.VideoCapture(0)
        (self.grabbed, self.frame) = self.video.read()
        # keep grabbing frames in the background so get_frame() always has the latest one
        threading.Thread(target=self.update, args=()).start()

    def __del__(self):
        self.video.release()

    def get_frame(self):
        image = self.frame
        # ret, jpeg = cv2.imencode('.jpg', image)
        return image

    def update(self):
        while True:
            (self.grabbed, self.frame) = self.video.read()
urls.py for the video part:
path('live/', views.livefeed, name="showlive"),
I have included a link to the 'live/' URL in an img tag in the HTML like this:
<h3> This is the live feed </h3>
<img src="{% url 'showlive' %}">
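To make the question concrete, the kind of control I have in mind is roughly the sketch below: a module-level flag that the generator checks, plus a second view that a start/stop button could hit to toggle it. The names stream_enabled and toggle_feed are just placeholders I made up, and I don't know whether this is a sane approach with StreamingHttpResponse.

import threading

from django.http import JsonResponse

# hypothetical module-level switch (cv2 and predict_video are already
# imported/defined in views.py above)
stream_enabled = threading.Event()
stream_enabled.set()  # streaming on by default


def gen(camera):
    # modified generator that checks the flag before producing each frame
    while True:
        if not stream_enabled.is_set():
            break  # stop yielding frames; the multipart stream ends here
        frame = camera.get_frame()
        m_image, lab = predict_video(frame, "result")
        ret, m_image = cv2.imencode('.jpg', m_image)
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + m_image.tobytes() + b'\r\n\r\n')


def toggle_feed(request):
    # wired to a start/stop button on the front end via a plain request
    if stream_enabled.is_set():
        stream_enabled.clear()
    else:
        stream_enabled.set()
    return JsonResponse({"streaming": stream_enabled.is_set()})

But I'm not sure whether a module-level flag is safe here, or if there is a better pattern.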
Upvotes: 0
Views: 1399
Reputation: 658
As you mentioned in the comments, you need video processing at 30 frames per second, so you need WebRTC to get the user's camera frames from the browser/mobile and send them to your backend.
There are various WebRTC implementations out there, but for Python you can use aiortc.
aiortc is a library for Web Real-Time Communication (WebRTC) and Object Real-Time Communication (ORTC) in Python. It is built on top of asyncio, Python's standard asynchronous I/O framework.
For your use case, where you want to process video in real time using OpenCV, there is an exact example in the project repository; check out the aiortc server example:
This example illustrates establishing audio, video, and a data channel with a browser. It also performs some image processing on the video frames using OpenCV.
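As a rough sketch of that pattern, adapted to call your predict_video (which I'm assuming takes and returns a BGR numpy image), the processing track looks roughly like this (following the VideoTransformTrack class from the aiortc server example):

from aiortc import MediaStreamTrack
from av import VideoFrame


class VideoTransformTrack(MediaStreamTrack):
    """Receives frames from the browser's camera track and runs the model on them."""

    kind = "video"

    def __init__(self, track):
        super().__init__()  # don't forget this
        self.track = track  # the incoming WebRTC video track

    async def recv(self):
        frame = await self.track.recv()
        img = frame.to_ndarray(format="bgr24")

        # run your model on the frame (predict_video is your own function)
        img, label = predict_video(img, "result")

        # repackage the processed image as a video frame, keeping timing info
        new_frame = VideoFrame.from_ndarray(img, format="bgr24")
        new_frame.pts = frame.pts
        new_frame.time_base = frame.time_base
        return new_frame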
And lastly, if you're using synchronous Django (anything before Django 3), it's not feasible to use Django for this service; you should consider async frameworks such as Django 3, FastAPI, Starlette, and so on.
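For example, with FastAPI (just one option; the /offer route and the Offer model below are my own names), the signalling endpoint that receives the browser's SDP offer and wires the incoming camera track through the processing track above could look roughly like this:

from aiortc import RTCPeerConnection, RTCSessionDescription
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
pcs = set()  # keep references so peer connections aren't garbage collected


class Offer(BaseModel):
    sdp: str
    type: str


@app.post("/offer")
async def offer(params: Offer):
    pc = RTCPeerConnection()
    pcs.add(pc)

    @pc.on("track")
    def on_track(track):
        if track.kind == "video":
            # send the processed frames back to the browser
            pc.addTrack(VideoTransformTrack(track))

    await pc.setRemoteDescription(RTCSessionDescription(sdp=params.sdp, type=params.type))
    answer = await pc.createAnswer()
    await pc.setLocalDescription(answer)
    return {"sdp": pc.localDescription.sdp, "type": pc.localDescription.type}

With this setup, the start/stop control you asked about naturally lives in the browser: the button simply starts or stops the getUserMedia capture or closes the peer connection.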
Upvotes: 2