SirTapir

Reputation: 3

Invalid output Tensor index: 1 when running a custom yolov3-tiny model on Google's TFLite Object Detection example

I'm facing an error when trying to run a yolov3-tiny model with TensorFlow Lite's Object Detection Android Demo. When I run the app on a mobile phone, it crashes with the following error:

E/AndroidRuntime: FATAL EXCEPTION: inference
    Process: org.tensorflow.lite.examples.detection, PID: 5535
    java.lang.IllegalArgumentException: Invalid output Tensor index: 1
        at org.tensorflow.lite.NativeInterpreterWrapper.getOutputTensor(NativeInterpreterWrapper.java:292)
        at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:166)
        at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:314)
        at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:204)
        at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:181)
        at android.os.Handler.handleCallback(Handler.java:873)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at android.os.Looper.loop(Looper.java:214)
        at android.os.HandlerThread.run(HandlerThread.java:65)

Here are my .tflite model and label file.

I changed the following in DetectorActivity.java to avoid this error:

TF_OD_API_INPUT_SIZE from 300 to 416
TF_OD_API_IS_QUANTIZED from true to false

Then I changed the following in TFLiteObjectDetectionAPIModel.java:

NUM_DETECTIONS from 10 to 2535
d.outputLocations = new float[1][NUM_DETECTIONS][4] to d.outputLocations = new float[1][NUM_DETECTIONS][7];

Here are the DetectorActivity.java and TFLiteObjectDetectionAPIModel.java files that I use.

Here are my model's .weights, .cfg, and .pb files if needed.

Any assistance would be appreciated.

Upvotes: 0

Views: 1337

Answers (1)

yyoon

Reputation: 3845

I could reproduce the issue using your custom model and source code. Thanks for providing them.

The main issue is that your custom detect.tflite model has an output spec which is different from the one expected by the object detection example app.

You can see the difference using a model visualizer such as netron.

The original model used by the example app (mobilenet_ssd) looks like this:

(netron screenshot: the mobilenet_ssd graph with its four output tensors)

As you can see, there are four float32 output tensors, which are essentially split from the final TFLite_Detection_PostProcess node.

(netron screenshot: the TFLite_Detection_PostProcess node and its four outputs)

On the other hand, your model has a single output tensor in [1,2535,7] shape.

(netron screenshot: the custom model's single output tensor of shape [1,2535,7])
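Incidentally, that shape is consistent with yolov3-tiny at a 416×416 input. The 2535 predictions come from the model's two detection grids, and the last dimension would be 4 box coordinates + 1 objectness score + one score per class. The class count below is an assumption I'm inferring from the shape (7 − 5 = 2), not something I verified against your label file:

```python
# Sanity check of the [1, 2535, 7] output shape.
# Assumptions: yolov3-tiny, 416x416 input, 2 classes in the label file.
input_size = 416
anchors_per_cell = 3
# yolov3-tiny predicts on two grids: stride 32 (13x13) and stride 16 (26x26).
grids = [input_size // 32, input_size // 16]

num_predictions = sum(g * g * anchors_per_cell for g in grids)
print(num_predictions)  # 2535 -> matches the middle dimension

num_classes = 2
values_per_prediction = 4 + 1 + num_classes  # box (x, y, w, h) + objectness + class scores
print(values_per_prediction)  # 7 -> matches the last dimension
```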

So when the app's Java code calls tfLite.runForMultipleInputsOutputs(inputArray, outputMap), the interpreter tries to fill the multiple outputs according to the indices you put in outputMap. However, because your model has only one output tensor, the attempt to fetch the output at index 1 (intended for the outputClasses array) fails with the error message above.
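The failure mode can be illustrated with a small sketch. This is a hypothetical simulation of the bounds check, not the actual TFLite code (the real logic lives in the Java/JNI NativeInterpreterWrapper), but it shows why an output map keyed 0..3 breaks against a model with a single output:

```python
# Hypothetical stand-in for the interpreter's output-index validation.
# Valid output indices are 0 .. output_tensor_count - 1.
def run_for_multiple_inputs_outputs(output_tensor_count, output_map):
    for index in output_map:
        if index < 0 or index >= output_tensor_count:
            raise ValueError(f"Invalid output Tensor index: {index}")
    return "ok"

# The example app registers four outputs, mirroring mobilenet_ssd's spec.
app_output_map = {0: "locations", 1: "classes", 2: "scores", 3: "num_detections"}

# The custom YOLO model exposes a single output tensor, so index 1 is rejected.
try:
    run_for_multiple_inputs_outputs(output_tensor_count=1, output_map=app_output_map)
except ValueError as e:
    print(e)  # Invalid output Tensor index: 1
```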

I don't know enough about the yolov3 model to give you the exact command for converting it, but this doc should give more detailed information on how the original model was converted.

Upvotes: 1
