cheezeItz

Reputation: 81

Keras model to Coreml and using OpenCV

I have a Keras model and have converted it to Coreml successfully. I pass a 50x50 color image in RGB format to the model, and everything works with my Keras model in Python. However, I am really struggling to get the same results from the Coreml model. I am using OpenCV in my iOS app and need to convert a cv::Mat to a CVPixelBufferRef. I am positive something is not right with my input, but I cannot figure out what it is. The preprocessing for the input that I feed into the Python model looks like this:

image = cv2.resize(image, (50, 50))
image = image.astype("float") / 255.0
image = img_to_array(image)
image = np.expand_dims(image, axis=0)
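
For reference, this is roughly how I read the Keras prediction that I am comparing against (just a sketch; output_labels is the same label list I pass to the converter below):

preds = model.predict(image)                           # shape (1, num_classes)
predicted_label = output_labels[int(np.argmax(preds[0]))]
print(predicted_label, float(preds[0].max()))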

Any help would be appreciated. Below are the Keras-to-Coreml conversion (with its output) and the function that converts a cv::Mat to a CVPixelBufferRef (the image here is already resized to 50x50).

Keras to Coreml conversion

coreml_model = coremltools.converters.keras.convert(model,
                                                    input_names='image',
                                                    image_input_names='image',
                                                    output_names='output',
                                                    class_labels=output_labels,
                                                    image_scale=1/255.0)
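
To sanity-check the conversion I run something like this on macOS, where coremltools can execute the model directly (MyModel.mlmodel and test.png are just placeholder names here):

from PIL import Image

coreml_model.save('MyModel.mlmodel')

# The model applies image_scale=1/255.0 itself, so feed it the raw 50x50 RGB image
pil_image = Image.open('test.png').convert('RGB').resize((50, 50))
result = coreml_model.predict({'image': pil_image})
print(result)  # class probabilities plus the predicted label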

Output of the Keras to Coreml python script

OpenCV Mat to CVPixelBufferRef

int width = 50;  // frame.cols;
int height = 50; // frame.rows;

NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                         [NSNumber numberWithInt:width], kCVPixelBufferWidthKey,
                         [NSNumber numberWithInt:height], kCVPixelBufferHeightKey,
                         nil];

CVPixelBufferRef imageBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef)options, &imageBuffer);

NSParameterAssert(status == kCVReturnSuccess && imageBuffer != NULL);

// Copy row by row so the pixel buffer's row padding (bytes per row) is respected.
// This assumes frame is CV_8UC4 (BGRA), matching kCVPixelFormatType_32BGRA.
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
for (int row = 0; row < height; row++) {
    memcpy(base + row * bytesPerRow, frame.ptr(row), width * 4);
}
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

return imageBuffer;

Upvotes: 1

Views: 1238

Answers (1)

Qin Heyang

Reputation: 1674

If you are trying to load an image with OpenCV and feed it into a Keras model, you need to be extra careful, because Keras by default uses PIL to load images during training. The problem is that PIL loads images in RGB order, whereas OpenCV loads them in BGR order. So if you feed the OpenCV image to Keras directly, you won't get any error, but your result will be totally wrong.

As for the solution, in Python you can simply use

img[..., [0, 2]] = img[..., [2, 0]]

to swap the first and third channels of a 3-channel image, i.e. convert between the OpenCV (BGR) and PIL (RGB) orderings.
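
Equivalently, you can use OpenCV's own color conversion, which makes the intent a bit more obvious:

import cv2

# Reorder the channels from OpenCV's BGR to the RGB order that PIL/Keras expects
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)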

Upvotes: 1
