Reputation: 61
I have retrained and fine-tuned Inception_v3 using Keras (2.0.4) and TensorFlow (1.1.0). When I convert the Keras model to a Core ML model with coremltools, I get a model that requires a MultiArray input. That makes sense if I understand it correctly: it is asking for [Height, Width, RGB] = (299, 299, 3). But I don't know how to convert a CVPixelBuffer to that format.
Can someone please help me understand what preprocessing needs to take place for my retrained Inception model to work in Core ML? Or what I need to do in the conversion so that it will accept a CVPixelBuffer?
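For reference, the conversion is essentially a bare call like the sketch below (file names are placeholders). Without further arguments, coremltools exposes the model's input as a MultiArray:

```python
import coremltools

# Bare Keras -> Core ML conversion. With no image_input_names
# argument, the resulting model's input is a MultiArray.
mlmodel = coremltools.converters.keras.convert('retrained_inception.h5')
mlmodel.save('Inception.mlmodel')
```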
Upvotes: 1
Views: 680
Reputation: 26
That's a very good question. It seems that the pixel buffer is almost always BGRA, and that does not crash Inception; classes are still predicted quite well. But the output values and vectors are different, and I suspect that Core ML does not convert BGRA to RGB, so the channels end up in the wrong order. I have not found any way to do that conversion in Swift for a pixel buffer; please let me know if one exists.
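One possibility (a sketch, untested against this model, assuming a kCVPixelFormatType_32BGRA buffer) is to permute the channels in place with vImage from the Accelerate framework before handing the buffer to the model:

```swift
import Accelerate
import CoreVideo

// Sketch: swap the B and R channels of a 32BGRA CVPixelBuffer in
// place with vImage, so the memory order becomes R,G,B,A.
func permuteBGRAToRGBA(_ pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    var buffer = vImage_Buffer(
        data: CVPixelBufferGetBaseAddress(pixelBuffer),
        height: vImagePixelCount(CVPixelBufferGetHeight(pixelBuffer)),
        width: vImagePixelCount(CVPixelBufferGetWidth(pixelBuffer)),
        rowBytes: CVPixelBufferGetBytesPerRow(pixelBuffer))

    // Memory order is B,G,R,A (indices 0,1,2,3); the map [2,1,0,3]
    // reorders those bytes to R,G,B,A.
    let permuteMap: [UInt8] = [2, 1, 0, 3]
    _ = vImagePermuteChannels_ARGB8888(&buffer, &buffer, permuteMap,
                                       vImage_Flags(kvImageNoFlags))
}
```

This only changes the byte order; whether Core ML then interprets the buffer the way the model expects is exactly the open question.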
Upvotes: 1
Reputation: 61
I had retrained InceptionV3, but I went back to look at my code. I had not set the input shape to (299, 299) in Keras; instead, I had forced all my photos to that size in preprocessing. As a result, the model JSON did not contain the input dimensions but had the value [null, null, null, 3], so the conversion to Core ML had no way of knowing that the input dims were supposed to be 299 x 299. I was able to save the model weights, save the JSON string of the model, edit the JSON to have the proper input shape [null, 299, 299, 3], load the edited JSON string as the new model, load the weights, and voilà! The Core ML model now properly accepts an Image.
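In code, the fix looks roughly like the sketch below (file names are placeholders; it assumes a functional Keras 2.0.x Model, whose architecture JSON nests the layer list under config['layers']):

```python
import json

import coremltools
from keras.models import load_model, model_from_json

# Placeholder file: the retrained InceptionV3 model saved from Keras.
model = load_model('retrained_inception.h5')
model.save_weights('weights.h5')

# Patch the architecture JSON so the input layer carries explicit
# spatial dimensions instead of [null, null, null, 3].
config = json.loads(model.to_json())
for layer in config['config']['layers']:
    if layer['class_name'] == 'InputLayer':
        layer['config']['batch_input_shape'] = [None, 299, 299, 3]

# Rebuild the model from the edited JSON and restore the weights.
fixed = model_from_json(json.dumps(config))
fixed.load_weights('weights.h5')

# With a fully specified input shape, the converter can treat the
# input as an Image. The scale/bias values assume the standard
# Inception preprocessing (pixels mapped to [-1, 1]); adjust them
# to match your own training pipeline.
mlmodel = coremltools.converters.keras.convert(
    fixed,
    input_names='image',
    image_input_names='image',
    image_scale=2.0 / 255.0,
    red_bias=-1.0,
    green_bias=-1.0,
    blue_bias=-1.0)
mlmodel.save('Inception.mlmodel')
```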
Upvotes: 1