Reputation: 115
I am struggling to read a raw RGB image stream from a buffer, convert it to an OpenCV (numpy) array, and encode it back to .jpg bytes without using PIL. I can achieve this with PIL, but can't figure out how to do the same using only numpy/OpenCV:
Reading in the RGB raw stream with PIL (works):
PILFr = Image.frombuffer('RGB',(768,576),buf,"raw",'RGB',0,1)
opencvPILFr = np.array(PILFr)
Attempting a similar read with OpenCV, I have:
FlatNp=np.frombuffer(buf,dtype=np.uint8, count=-1)
opencvPILFr=np.resize(FlatNp,(576,768))
Re-encoding to a buffer (self.frame) as a JPEG byte stream with PIL (works):
PILFr=Image.fromarray(cv2.cvtColor(opencvPILFr, cv2.COLOR_BGR2RGB))
tmp = io.BytesIO()
PILFr.save(tmp,format='jpeg', quality=100)
self.frame = tmp.getvalue()
Attempting to re-encode with OpenCV (does not work!):
tmp = io.BytesIO()
tmp=cv2.imencode(".jpg",opencvPILFr) #this returns a tuple? not numpy array?
self.frame=tmp.tobytes()
Help appreciated - I can see some reasons why the above WOULDN'T work... but can't see how to fix it.
Upvotes: 2
Views: 2352
Reputation: 115
I figured this out; further improvements welcome. I'm still not entirely sure I understand the shape and contents of the tuple returned by imencode.
See below my working numpy/opencv alternative to the PIL approach above:
Read in the RGB 24-bit image from the buffer and convert it to a numpy array (opencvFr) for OpenCV:
FlatNp = np.frombuffer(buf, dtype=np.uint8, count=-1)
opencvFr = np.resize(FlatNp, (576, 768, 3))  # reshape also works, and raises on a size mismatch instead of silently padding/truncating
Encoding the edited numpy image array (opencvFr) as JPEG bytes:
tmp = cv2.imencode('.jpg', opencvFr)  # returns a (success_flag, encoded_array) tuple
self.frame = tmp[1].tobytes()  # bytes containing the compressed JPEG image
(I am experimenting with this for a streaming camera application)
Upvotes: 1