Angel Lopez

Reputation: 49

Using "with" inside a thread in python + picamera + opencv

I am using the Raspberry Pi with the picamera and opencv Python modules, trying to do some rapid capture and processing. Currently I am using the recipe in http://picamera.readthedocs.org/en/latest/recipes2.html#rapid-capture-and-processing to capture each image to a BytesIO stream. Then I have added code inside the ImageProcessor class to convert each stream to an opencv object and do some analysis "on the fly".

My current code therefore looks something like:

import io
import time
import threading
import picamera
import cv2
import picamera.array
import numpy as np


# Create a pool of image processors
done = False
lock = threading.Lock()
pool = []

class ImageProcessor(threading.Thread):
    def __init__(self):
        super(ImageProcessor, self).__init__()
        self.stream = io.BytesIO()
        self.event = threading.Event()
        self.terminated = False
        self.start()

    def run(self):
        # This method runs in a separate thread
        global done
        while not self.terminated:
            # Wait for an image to be written to the stream
            if self.event.wait(1):
                try:
                    self.stream.seek(0)
                    # Read the image and do some processing on it
                    # Construct a numpy array from the stream
                    data = np.frombuffer(self.stream.getvalue(), dtype=np.uint8)
                    # "Decode" the image from the array, preserving colour
                    image = cv2.imdecode(data, 1)

                    # Here goes more opencv code doing image proccessing

                    # Set done to True if you want the script to terminate
                    # at some point
                    #done=True
                finally:
                    # Reset the stream and event
                    self.stream.seek(0)
                    self.stream.truncate()
                    self.event.clear()
                    # Return ourselves to the pool
                    with lock:
                        pool.append(self)

def streams():
    while not done:
        with lock:
            if pool:
                processor = pool.pop()
            else:
                processor = None
        if processor:
            yield processor.stream
            processor.event.set()
        else:
            # When the pool is starved, wait a while for it to refill
            print("Waiting")
            time.sleep(0.1)

with picamera.PiCamera() as camera:
    pool = [ImageProcessor() for i in range(4)]
    camera.resolution = (640, 480)
    camera.framerate = 30
    camera.start_preview()
    time.sleep(2)
    camera.capture_sequence(streams(), use_video_port=True)

# Shut down the processors in an orderly fashion
while pool:
    with lock:
        processor = pool.pop()
    processor.terminated = True
    processor.join()

The problem is that this involves JPEG encoding and decoding of each image which is lossy and time consuming. The suggested alternative is capturing to a picamera.array: http://picamera.readthedocs.org/en/latest/recipes1.html#capturing-to-an-opencv-object , for a single image the code:

import time
import picamera
import picamera.array
import cv2

with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    with picamera.array.PiRGBArray(camera) as stream:
        camera.capture(stream, format='bgr')
        # At this point the image is available as stream.array
        image = stream.array

which works great, but I do not know how to combine these two pieces of code so that the ImageProcessor class uses a picamera.array instead of a BytesIO stream. The need to use a "with" statement to create the stream for the picamera.array confuses me (I am new to Python... ;) ). Thanks for any pointers. Angel
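For context on the `with` statement itself: it is essentially shorthand for calling an object's `__enter__` method on entry and its `__exit__` method (which typically closes the resource) on exit, even if an exception occurs. A minimal sketch using plain `io.BytesIO` (not picamera-specific) shows the equivalence, which is why a context manager can also be opened and closed manually, e.g. from a thread's `__init__` and shutdown code:

```python
import io

# A `with` block closes the stream automatically on exit...
with io.BytesIO() as stream:
    stream.write(b"data")
# stream.closed is now True

# ...and is roughly equivalent to this try/finally form,
# where entry and cleanup are written out explicitly:
stream = io.BytesIO()
try:
    stream.write(b"data")
finally:
    stream.close()  # what __exit__ does for BytesIO
```

So one option is to create the `picamera.array.PiRGBArray` without `with` and call its cleanup method yourself when the thread terminates.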

Upvotes: 4

Views: 2614

Answers (1)

Cam Phillips

Reputation: 31

I found that you can just copy the relevant helpers from the picamera module's source:

import numpy as np
from picamera.exc import PiCameraValueError


def raw_resolution(resolution):
    """
    Round a (width, height) tuple up to the nearest multiple of 32 horizontally
    and 16 vertically (as this is what the Pi's camera module does for
    unencoded output).
    """
    width, height = resolution
    fwidth = (width + 31) // 32 * 32
    fheight = (height + 15) // 16 * 16
    return fwidth, fheight


def bytes_to_rgb(data, resolution):
    """
    Converts a bytes object containing RGB/BGR data to a `numpy`_ array.
    """
    width, height = resolution
    fwidth, fheight = raw_resolution(resolution)
    if len(data) != (fwidth * fheight * 3):
        raise PiCameraValueError(
            'Incorrect buffer length for resolution %dx%d' % (width, height))
    # Crop to the actual resolution
    return np.frombuffer(data, dtype=np.uint8).\
            reshape((fheight, fwidth, 3))[:height, :width, :]

You can convert by calling

image = bytes_to_rgb(self.stream.getvalue(), resolution)

where resolution is (width, height). The reason the camera object is passed to PiRGBArray is so that it can look up the camera's resolution for exactly this padding calculation.
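The padding logic can be checked without a camera at all. A self-contained sketch with synthetic data (the `PiCameraValueError` is swapped for a plain `ValueError` here so it runs standalone): a 100x100 request gets padded by the camera firmware to 128x112, and `bytes_to_rgb` crops the result back.

```python
import numpy as np

def raw_resolution(resolution):
    # Round width up to a multiple of 32 and height up to a multiple of 16,
    # matching what the Pi's camera does for unencoded output
    width, height = resolution
    return (width + 31) // 32 * 32, (height + 15) // 16 * 16

def bytes_to_rgb(data, resolution):
    # Reshape the padded raw frame and crop to the requested resolution
    width, height = resolution
    fwidth, fheight = raw_resolution(resolution)
    if len(data) != (fwidth * fheight * 3):
        raise ValueError(
            'Incorrect buffer length for resolution %dx%d' % (width, height))
    return np.frombuffer(data, dtype=np.uint8).\
            reshape((fheight, fwidth, 3))[:height, :width, :]

# Simulate a raw capture at 100x100: the buffer is padded to 128x112
fake_frame = bytes(128 * 112 * 3)
image = bytes_to_rgb(fake_frame, (100, 100))
print(image.shape)  # (100, 100, 3)
```

Note that 640x480 (the resolution in the question) is already a multiple of 32x16, so no cropping happens there and the buffer length is exactly 640 * 480 * 3 bytes.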

Upvotes: 2
