Reputation: 7850
Hi, the following steps were taken:
I trained tiny YOLO on a custom dataset with just one class.
I converted the .weights (Darknet) file to .h5 (Keras) and verified that the Keras model works fine.
Now, when I convert the Keras model to Core ML, I am not getting coordinates and confidence as outputs.
Command used to convert to Core ML:

import coremltools

coreml_model = coremltools.converters.keras.convert(
    'model_data/yolo.h5',
    input_names='image',
    class_labels=output_labels,
    image_input_names='image',
    input_name_shape_dict={'image': [None, 416, 416, 3]},
)
However, I have checked a third-party YOLO model converted to Core ML, and it does give coordinates and confidence as outputs. See the screenshots:
[screenshot: 3rd-party YOLO model converted to Core ML]
[screenshot: my YOLO model converted to Core ML]
Keras==2.1.5
coremltools==3.3
Upvotes: 0
Views: 1117
Reputation: 7850
I'll keep updating this, as it may be useful for others.
This is specific to the scenario where you have custom Darknet weights that detect custom objects in a scene.
The typical flow for this is:
Train tiny YOLO (Darknet) on your custom dataset.
Convert the .weights file to a Keras .h5 model.
Convert the Keras model to Core ML with coremltools.
Decode the raw model outputs into bounding boxes and confidences in your app code.
Hope this is helpful. If you need more help, drop your request in the comments.
Regards, Ankit
Upvotes: 0
Reputation: 7892
Don't add this: class_labels=output_labels
It will turn your Core ML model into a classifier, which is treated specially in Core ML. Since your model is an object detector, you don't want this.
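For reference, a minimal sketch of the conversion call with class_labels removed (the file path and input shape are taken from the question; image_scale is an assumption, since YOLO is commonly trained on pixels normalized to [0, 1]):

```python
import coremltools

# Without class_labels, the converted model stays a plain neural
# network (object detector) instead of being wrapped as a classifier.
coreml_model = coremltools.converters.keras.convert(
    'model_data/yolo.h5',
    input_names='image',
    image_input_names='image',
    input_name_shape_dict={'image': [None, 416, 416, 3]},
    image_scale=1 / 255.0,  # assumption: model expects [0, 1] inputs
)
coreml_model.save('Yolo.mlmodel')
```

The output will then be the raw YOLO feature map, which you still have to decode yourself.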
Look here for the rest: https://github.com/hollance/YOLO-CoreML-MPSNNGraph
Basically, you need to decode the bounding box coordinates yourself in Swift or Obj-C code. You can add this to the model too, but in my experience that is slower. (Here is a blog post that shows how to do this for SSD, which is similar but not exactly the same as YOLO.)
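To illustrate the decoding step, here is a hedged sketch in Python (the same math would be written in Swift or Obj-C in the app). It assumes the standard tiny YOLO v2 setup: a 416x416 input, a 13x13 grid, and one raw prediction per anchor encoded as (tx, ty, tw, th, tc). The function name and signature are hypothetical, for illustration only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolo_cell(raw, anchor_w, anchor_h, col, row,
                     grid=13, img_size=416):
    """Decode one raw prediction (tx, ty, tw, th, tc) for one anchor
    in grid cell (col, row) into pixel-space box + confidence."""
    tx, ty, tw, th, tc = raw
    cell = img_size / grid              # pixels per grid cell (32 here)
    x = (col + sigmoid(tx)) * cell      # box centre x in pixels
    y = (row + sigmoid(ty)) * cell      # box centre y in pixels
    w = np.exp(tw) * anchor_w * cell    # box width in pixels
    h = np.exp(th) * anchor_h * cell    # box height in pixels
    confidence = sigmoid(tc)            # objectness score in [0, 1]
    return x, y, w, h, confidence
```

In practice you run this over every cell and anchor, multiply the confidence by the (softmaxed) class scores, threshold, and apply non-maximum suppression to get the final detections.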
Upvotes: 2