sdk

Reputation: 229

Integrate OpenCV webcam into a Kivy user interface

My current program is written in Python and uses OpenCV. It relies on webcam captures, and I process every captured frame:

import cv2

# use the webcam
cap = cv2.VideoCapture(0)
while True:
    # read a frame from the webcam
    ret, img = cap.read()
    if not ret:
        break
    # transform image

cap.release()

I would like to build a Kivy interface (or another graphical user interface) with buttons, while keeping the existing webcam-capture functionality.

I found this example: https://kivy.org/docs/examples/gen__camera__main__py.html — but it doesn’t explain how to get the webcam image so that I can process it with OpenCV.

I found an older example: http://thezestyblogfarmer.blogspot.it/2013/10/kivy-python-script-for-capturing.html — it saves screenshots to disk with the ‘screenshot’ function, and I could then read the saved files back and process them, but that seems like an unnecessary extra step.
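Roughly like this, I imagine (a sketch only; Window.screenshot and the file name are my own illustration, not taken from the blog post):

from kivy.core.window import Window
import cv2

# save the current window contents to a PNG file...
filename = Window.screenshot(name='frame.png')
# ...then read that file back with OpenCV and process it
img = cv2.imread(filename)

The disk round-trip in the middle is exactly the step I would like to avoid.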

What else can I try?

Upvotes: 10

Views: 26990

Answers (2)

Cristian

Reputation: 415

Found this example here: https://groups.google.com/forum/#!topic/kivy-users/N18DmblNWb0

It converts the OpenCV captures to Kivy textures, so you can apply any kind of OpenCV transformation before displaying the frame in your Kivy interface.

__author__ = 'bunkus'
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.image import Image
from kivy.clock import Clock
from kivy.graphics.texture import Texture

import cv2

class CamApp(App):

    def build(self):
        self.img1 = Image()
        layout = BoxLayout()
        layout.add_widget(self.img1)
        # OpenCV setup: open the webcam and a plain OpenCV window for comparison
        self.capture = cv2.VideoCapture(0)
        cv2.namedWindow("CV2 Image")
        # poll the camera roughly 33 times per second
        Clock.schedule_interval(self.update, 1.0 / 33.0)
        return layout

    def update(self, dt):
        # grab a frame from the webcam
        ret, frame = self.capture.read()
        if not ret:
            return
        # display the frame in the OpenCV window
        cv2.imshow("CV2 Image", frame)
        # convert it to a Kivy texture (flip vertically because OpenCV and
        # Kivy use opposite y-axis origins)
        buf = cv2.flip(frame, 0).tobytes()
        texture1 = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='bgr')
        # if working on a Raspberry Pi, use colorfmt='rgba' here instead,
        # but stick with 'bgr' in blit_buffer
        texture1.blit_buffer(buf, colorfmt='bgr', bufferfmt='ubyte')
        # display the texture in the Kivy Image widget
        self.img1.texture = texture1

if __name__ == '__main__':
    CamApp().run()
    cv2.destroyAllWindows()
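For example, the processing can be slotted into update() right after capture.read() — a minimal sketch, not part of the original answer, using Canny edge detection as an arbitrary placeholder and converting back to three channels so the 'bgr' texture format still matches:

    def update(self, dt):
        ret, frame = self.capture.read()
        if not ret:
            return
        # any OpenCV processing goes here, on the raw BGR frame;
        # Canny is just an arbitrary example
        edges = cv2.Canny(frame, 100, 200)               # single-channel result
        frame = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)  # back to 3 channels for the 'bgr' texture
        # then flip, convert to bytes and blit to the texture exactly as above
        buf = cv2.flip(frame, 0).tobytes()
        texture1 = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='bgr')
        texture1.blit_buffer(buf, colorfmt='bgr', bufferfmt='ubyte')
        self.img1.texture = texture1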

Upvotes: 22

Peter Badida

Reputation: 12189

Note: I have no clue how OpenCV works, but I found camera_opencv.py, so there appears to be an easy way to work with it.

As you can see in the camera example, this is the default way, and if you look in __init__.py for the camera core you can see opencv listed among the providers, so it may work with OpenCV out of the box. Check the log to see whether OpenCV was detected as a provider; you should see CameraOpenCV mentioned somewhere if it was, and that provider should be used when capturing an image.
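A quick way to check (a sketch based on Kivy's documented KIVY_CAMERA environment variable; setting it before any kivy import forces the provider choice):

import os
os.environ['KIVY_CAMERA'] = 'opencv'  # must be set before importing kivy

from kivy.core.camera import Camera
# the Kivy log should now report which camera provider was loaded,
# e.g. a line mentioning opencv / CameraOpenCV if it was detected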

If, however, you want to work with OpenCV directly (i.e. cap.read() and similar), then you need to write your own handler for the provider or add more options to the camera_opencv file.
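If you don't need that direct access and the provider route is enough, a minimal version of the linked camera example looks roughly like this (a sketch only; the button, resolution and file name are my own illustration):

from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.camera import Camera
from kivy.uix.button import Button

class ProviderCamApp(App):
    def build(self):
        layout = BoxLayout(orientation='vertical')
        # the Camera widget uses whichever core provider Kivy selected (possibly opencv)
        self.camera = Camera(play=True, resolution=(640, 480))
        button = Button(text='Capture', size_hint_y=0.1)
        button.bind(on_press=self.capture)
        layout.add_widget(self.camera)
        layout.add_widget(button)
        return layout

    def capture(self, instance):
        # export_to_png saves whatever the widget is currently displaying
        self.camera.export_to_png('capture.png')

if __name__ == '__main__':
    ProviderCamApp().run()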

Upvotes: 3
