Reputation: 1156
Some context:
I have a packed BGRA image in a buffer that I would like to convert to RGB.
I use the following code to convert it to RGB using OpenCV:
np_a = np.array( image_buffer ) # image_buffer is an array of uint8
rgb_a = cv2.cvtColor( np_a, cv2.COLOR_BGRA2RGB )
But:
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in ipp_cvtColor,
file /home/username/opencv/opencv-3.1.0/modules/imgproc/src/color.cpp, line 7341
As OpenCV is open source, I dug into the source code to figure out what happened.
static bool ipp_cvtColor( Mat &src, OutputArray _dst, int code, int dcn )
{
    int stype = src.type();
    int scn = CV_MAT_CN(stype), depth = CV_MAT_DEPTH(stype);

    Mat dst;
    Size sz = src.size();

    switch( code )
    {
#if IPP_VERSION_X100 >= 700
    case CV_BGR2BGRA: case CV_RGB2BGRA: case CV_BGRA2BGR:
    case CV_RGBA2BGR: case CV_RGB2BGR: case CV_BGRA2RGBA:
        CV_Assert( scn == 3 || scn == 4 );
And:
#define CV_MAT_CN(flags) ((((flags) & CV_MAT_CN_MASK) >> CV_CN_SHIFT) + 1)
#define CV_MAT_CN_MASK ((CV_CN_MAX - 1) << CV_CN_SHIFT)
#define CV_CN_MAX 512
#define CV_CN_SHIFT 3
I am not sure I understand these lines of code. I assume scn is the "source channel number" and that it is related to the number of dimensions of the array. The assertion would then fail because the array was created as a 1D array. Indeed, print np_a.ndim outputs 1 and print np_a.shape outputs (422400,).
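Redoing the macro arithmetic in plain Python (a quick sketch; the flag values for CV_8UC1 and CV_8UC4 are reconstructed from the defines above, with the CV_8U depth code taken to be 0) seems to confirm that a flat uint8 array is treated as a single-channel image:

CV_CN_SHIFT = 3
CV_CN_MAX = 512
CV_MAT_CN_MASK = (CV_CN_MAX - 1) << CV_CN_SHIFT

def cv_mat_cn(flags):
    # same computation as the CV_MAT_CN macro
    return ((flags & CV_MAT_CN_MASK) >> CV_CN_SHIFT) + 1

CV_8U = 0                                    # depth code for 8-bit unsigned
CV_8UC1 = CV_8U + ((1 - 1) << CV_CN_SHIFT)   # type flags of a 1-channel 8-bit Mat
CV_8UC4 = CV_8U + ((4 - 1) << CV_CN_SHIFT)   # type flags of a 4-channel 8-bit Mat

print cv_mat_cn(CV_8UC1)  # 1 -- what a 1D uint8 array maps to, hence the failed assertion
print cv_mat_cn(CV_8UC4)  # 4 -- what a (height, width, 4) array maps to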
I tried many things. Among them, setting the array's shape manually with np_a.shape = (image_height, image_width), which ends with this error:
Program received signal SIGSEGV, Segmentation fault.
0x0000000000570558 in visit_decref ()
What am I missing?
Am I supposed to manually unpack the image before converting it? How?
FIRST EDIT:
The buffer is filled using a C API. It is supposed to be an array of UINT8.
Also, this:
print type( np_a )
print type( np_a[ 0 ] )
print np_a.shape
Outputs:
<type 'numpy.ndarray'>
<type 'numpy.uint8'>
(422400,)
SECOND EDIT:
The issue is already solved; this is only for better understanding / another way.
Using:
np_a = np.array( image_buffer )
np_a_reshaped = np_a.reshape( height, width, depth )
np_a_converted = np_a_reshaped[ ...,:3 ][ ...,::-1 ]
print len( np_a_converted )
Outputs: 480.
So yes, I was probably calling np_a.reshape( ... ) on its own and assuming that it would change the shape of np_a itself. Why would you want to change the shape of the buffer by creating a new variable, instead of changing it in place?
However, the size of np_a_converted is still not correct. Indeed, later in the program, there is the following code:
img = wx.ImageFromBuffer( width, height, np_a_converted )
bmp = wx.Bitmap( img )
This creates a wx.Bitmap from the buffer without copying the data.
From wx.ImageFromBuffer's documentation:
The dataBuffer object is expected to contain a series of RGB bytes and be width*height*3 bytes long.
And it gives this error:
File "/usr/local/lib/python2.7/dist-packages/wx/core.py", line 656, in ImageFromBuffer
img.SetDataBuffer(dataBuffer)
ValueError: Invalid data buffer size.
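For reference, checking that documented requirement directly (a quick sketch with width = 640, height = 480, depth = 4) shows that len() only reports the first dimension, not the number of bytes:

import numpy as np

width, height, depth = 640, 480, 4
np_a_reshaped = np.zeros((height, width, depth), dtype=np.uint8)
np_a_converted = np_a_reshaped[..., :3][..., ::-1]

print len(np_a_converted)   # 480 -- len() only gives the first dimension (height)
print np_a_converted.size   # 921600 elements (== bytes for uint8), i.e. the width*height*3 wx expects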
Upvotes: 4
Views: 7835
Reputation: 97671
If your buffer is 8-bit "packed", then all you're missing is a reshape:
image = image_buffer.reshape(height, width, 4)
rgb = cv2.cvtColor(image, cv2.COLOR_BGRA2RGB)
It's not clear to me what BGRA2RGB does here - there's no "right" way to remove an alpha channel without choosing a background color. If the alpha data is garbage, you can go with the simpler
rgb = image[...,:3][...,::-1]
to ignore the alpha channel and then flip the byte order. This is O(w*h) times faster than using opencv!
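A quick sanity check (a sketch with random BGRA data) suggests the two approaches produce the same pixels, i.e. COLOR_BGRA2RGB simply drops the alpha channel; the slicing version just avoids allocating a new array:

import numpy as np
import cv2

bgra = np.random.randint(0, 256, (480, 640, 4)).astype(np.uint8)

via_cvt = cv2.cvtColor(bgra, cv2.COLOR_BGRA2RGB)   # new, contiguous array
via_slice = bgra[..., :3][..., ::-1]               # strided view into the same data

print np.array_equal(via_cvt, via_slice)           # True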
Note that if you plan to pass this array back to opencv, you might need to add:
rgb = np.copy(rgb)
which makes the data contiguous in memory, a requirement of some opencv functions. This obviously loses you the efficiency gain mentioned above.
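For example (a sketch reusing the shapes from above), the array flags show when that copy is actually needed:

import numpy as np

image = np.zeros((480, 640, 4), dtype=np.uint8)
rgb = image[..., :3][..., ::-1]

print rgb.flags['C_CONTIGUOUS']   # False -- still a strided view into the BGRA data

rgb = np.copy(rgb)                # or np.ascontiguousarray(rgb)
print rgb.flags['C_CONTIGUOUS']   # True -- packed RGB bytes, safe to hand to opencv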
Upvotes: 4
Reputation: 1156
Here is how I fixed the issues:
The segmentation fault was due to a wrong formula for the buffer size.
I then used np_a.shape = (image_height, image_width, image_depth) to set the buffer structure to a 4-channel image (the assertion had failed because the buffer was read as a 1-dimensional array). Indeed, now, print np_a.shape outputs (480, 640, 4).
Somehow, the np_a.reshape( image_height, image_width, image_depth ) proposed by Eric did not work for me.
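In hindsight, that is probably because reshape returns a new array object instead of modifying np_a in place; a minimal sketch of the difference (assuming a 480 * 640 * 4 byte buffer):

import numpy as np

np_a = np.zeros(480 * 640 * 4, dtype=np.uint8)

reshaped = np_a.reshape(480, 640, 4)   # returns a reshaped view; np_a keeps its old shape
print np_a.shape                       # (1228800,)

np_a.shape = (480, 640, 4)             # changes np_a itself
print np_a.shape                       # (480, 640, 4)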
Upvotes: 0
Reputation: 4552
Since you said that you are also using the C API, here is how you would do it in C++.
Let us assume that you have your BGRA data in 8-bit precision stored in a uchar* buffer.
Then all you have to do is cast this buffer to a Vec4b* like this:
Vec4b* new_buffer = (Vec4b*) buffer;
Then create your image like this:
cv::Mat image(height, width, CV_8UC4, new_buffer);
You can then apply the cvtColor function:
cv::Mat destination;
cv::cvtColor(image, destination, CV_BGRA2BGR);
EDIT:
Actually, you don't even need the cast. You can pass the data to the constructor directly:
cv::Mat image(height, width, CV_8UC4, buffer);
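If you want the same zero-copy behaviour from the Python side (a sketch, assuming image_buffer exposes the buffer protocol and height/width are known), np.frombuffer plays the role of the Mat constructor above:

import numpy as np
import cv2

np_a = np.frombuffer(image_buffer, dtype=np.uint8)   # wraps the existing buffer, no copy
np_a = np_a.reshape(height, width, 4)                # 4 channels: BGRA
bgr = cv2.cvtColor(np_a, cv2.COLOR_BGRA2BGR)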
Upvotes: 0