ajay_nasa

Reputation: 2298

Convert VNCoreMLFeatureValueObservations to VNDetectedObjectObservation

I have exported a YOLOv5 model, but the outputs come back as VNCoreMLFeatureValueObservation objects instead of VNDetectedObjectObservation.

Output configuration:

[<VNCoreMLFeatureValueObservation: 0x282f19980> 4FC4A8B2-A967-4CC7-8A86-E16863258F1B requestRevision=1 confidence=1.000000 "2308" - "MultiArray : Float32 1 x 3 x 20 x 20 x 85 array" (1.000000), <VNCoreMLFeatureValueObservation: 0x282f18a20> DA7269E9-BE2D-4A50-B5F9-99D3153CB0E7 requestRevision=1 confidence=1.000000 "2327" - "MultiArray : Float32 1 x 3 x 40 x 40 x 85 array" (1.000000), <VNCoreMLFeatureValueObservation: 0x282f18c60> 88211394-85CE-492E-81FC-5639E82B3416 requestRevision=1 confidence=1.000000 "2346" - "MultiArray : Float32 1 x 3 x 80 x 80 x 85 array" (1.000000)]

So my question is: what information does this VNCoreMLFeatureValueObservation MultiArray hold (is it something like a UIImage or a CGRect, or something different?), and how can I convert this multidimensional array into a useful set of data that I can actually use?

Upvotes: 0

Views: 556

Answers (1)

Matthijs Hollemans

Reputation: 7892

You need to turn your YOLO model into a pipeline that has an NMS (non-maximum suppression) module at the end. Then Core ML / Vision will treat the model as an object detector and return VNRecognizedObjectObservation results, each with a bounding box and class labels, instead of raw MultiArrays.

See also my blog post: https://machinethink.net/blog/mobilenet-ssdlite-coreml/
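Roughly, the pipeline assembly with coremltools looks like the sketch below. This is not drop-in code: the file name yolov5s_decoded.mlmodel, the feature names raw_coordinates / raw_confidence, the 640x640 input size, and the thresholds are all placeholders, and it assumes your model already outputs decoded boxes of shape (num_boxes, 4) and class scores of shape (num_boxes, 80).

    import coremltools as ct
    from coremltools.models import datatypes
    from coremltools.models.pipeline import Pipeline
    from coremltools.proto import Model_pb2

    num_classes = 80  # COCO

    # Assumes the converted model already outputs decoded predictions as two
    # MLMultiArrays (these feature names are hypothetical):
    #   "raw_coordinates": (num_boxes, 4)            normalized x, y, w, h
    #   "raw_confidence":  (num_boxes, num_classes)  per-class scores
    detector = ct.models.MLModel("yolov5s_decoded.mlmodel")
    det_spec = detector.get_spec()

    # Build a stand-alone NMS model by filling in its spec directly.
    nms_spec = Model_pb2.Model()
    nms_spec.specificationVersion = 4

    # The NMS model's inputs and outputs mirror the detector's outputs.
    for det_out in det_spec.description.output:
        nms_in = nms_spec.description.input.add()
        nms_in.ParseFromString(det_out.SerializeToString())
        nms_out = nms_spec.description.output.add()
        nms_out.ParseFromString(det_out.SerializeToString())

    nms_spec.description.output[0].name = "coordinates"
    nms_spec.description.output[1].name = "confidence"

    nms = nms_spec.nonMaximumSuppression
    nms.coordinatesInputFeatureName = "raw_coordinates"
    nms.confidenceInputFeatureName = "raw_confidence"
    nms.coordinatesOutputFeatureName = "coordinates"
    nms.confidenceOutputFeatureName = "confidence"
    nms.iouThreshold = 0.45          # example defaults, tune for your model
    nms.confidenceThreshold = 0.25
    nms.stringClassLabels.vector.extend([f"class_{i}" for i in range(num_classes)])
    nms_model = ct.models.MLModel(nms_spec)

    # Glue detector + NMS together. The placeholder input type below gets
    # overwritten with the detector's real (image) input afterwards.
    pipeline = Pipeline(input_features=[("image", datatypes.Array(3, 640, 640))],
                        output_features=["coordinates", "confidence"])
    pipeline.add_model(detector)
    pipeline.add_model(nms_model)
    pipeline.spec.description.input[0].ParseFromString(
        det_spec.description.input[0].SerializeToString())
    pipeline.spec.specificationVersion = 4

    ct.models.MLModel(pipeline.spec).save("yolov5s_pipeline.mlmodel")

Note that the outputs you posted (1 x 3 x 20 x 20 x 85 and so on) are the raw grid tensors, so you would first need the decoding step baked into the exported model (or added as another stage in front of the NMS) before this works. A production version would also mark the NMS output shapes as flexible; the blog post above walks through the complete recipe.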

Upvotes: 2
