Reputation: 417
I have a Python app on a client PC which tests for movement using OpenCV. I want to send the OpenCV frame that is captured over RabbitMQ to a server, so the server can test for presence of a person in the frame.
I have it working by converting the frame to JPEG, base64-encoding it, sending it via the queue, base64-decoding at the other end, and saving the result to a file. The JPEG is then viewable on the server, and I can load it into OpenCV there with cv2.imread('captured.jpg') and test for the presence of a person.
I now want to avoid saving the JPEG to disk and loading it back into Python on the server, but I can't get the message queue body content to load into OpenCV directly. Below is the client code that sends the content, followed by the server code that processes it (minus the detectorAPI function, which analyses the frame).
# Capture a frame and JPEG-encode it in memory
retval, image = camera.read()
retval, buffer = cv2.imencode('.jpg', image)
jpgb64 = base64.b64encode(buffer)

# Publish the base64 payload to the 'Central' queue
properties = pika.BasicProperties(app_id='motion', content_type='image/jpg', reply_to=self.ENVIRON["clientName"])
connection = pika.BlockingConnection(self.parameters)
channel = connection.channel()
channel.basic_publish(exchange='', routing_key='Central', body=jpgb64, properties=properties)
connection.close()
def callback(ch, method, properties, body):
    # Decode the base64 payload, write it to disk, then read it back
    imgbin = base64.b64decode(body)
    with open('captured.jpg', 'wb') as f_output:
        f_output.write(imgbin)
    frame = cv2.imread('captured.jpg')
    dt = detector.detectorAPI()
    result = dt.objectCount(frame)
    print(result)
I have also tried skipping the JPEG conversion on the client and simply sending the OpenCV frame itself, base64-encoded. But after decoding it on the server, I can't get OpenCV to recognise it as a 'frame'. I assume an OpenCV frame is a special data type and I am just ending up with a binary object, which isn't the same. But that is just a guess, and I don't know how to fix it anyway.
How can I send the captured OpenCV frame from the client, so that OpenCV on the server will be able to process it as though it was a frame captured on the server itself?
Upvotes: 1
Views: 1231
Reputation: 207465
I don't use RabbitMQ, but I believe it can carry binary data, so there is no need to base64-encode your data on the sending side and decode it at the receiving end. You should be able to send the buffer returned by cv2.imencode() as-is, which will be faster and simpler.
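As a rough illustration of why skipping base64 helps: it emits 4 output bytes for every 3 input bytes, so the payload grows by about a third. A minimal stdlib-only sketch (the byte string here is just a stand-in for the JPEG buffer from cv2.imencode):

```python
import base64

# Stand-in for the binary JPEG buffer (hypothetical data, not a real image)
raw = bytes(range(256)) * 100          # 25,600 bytes of binary payload

encoded = base64.b64encode(raw)        # the extra step the question currently takes

# base64 expands 3 input bytes into 4 output bytes: ~33% larger on the wire
print(len(raw))       # 25600
print(len(encoded))   # 34136

# Sending the raw bytes instead would look like this (needs a live broker, not run here):
# channel.basic_publish(exchange='', routing_key='Central',
#                       body=buffer.tobytes(), properties=properties)
```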
At the receiving end, you can convert the received message into a Numpy array and decode back into the original frame without going to disk like this:
import numpy as np

# Convert the received message into a Numpy array
jpg = np.frombuffer(RECEIVEDMESSAGE, dtype=np.uint8)

# JPEG-decode back into the original frame - itself a Numpy array
im = cv2.imdecode(jpg, cv2.IMREAD_UNCHANGED)
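The bytes-to-array step is exact: tobytes() on the sender and np.frombuffer() on the receiver reconstruct the same uint8 array byte for byte, so nothing needs to touch the disk. A small NumPy-only sketch, using a made-up array in place of a real imencode buffer and a real received message:

```python
import numpy as np

# Pretend this uint8 array is the encoded JPEG buffer from cv2.imencode
buffer = np.arange(24, dtype=np.uint8)

# Client side: the raw bytes to put on the queue as the message body
payload = buffer.tobytes()

# Server side: rebuild the Numpy array straight from the received bytes
jpg = np.frombuffer(payload, dtype=np.uint8)

print(np.array_equal(buffer, jpg))   # True - byte-for-byte identical
```

From there, cv2.imdecode(jpg, ...) would yield the frame just as if it had been captured locally.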
Upvotes: 1