Reputation: 83
I'm working to convert a frozen inference graph I obtained from running this notebook to a TFLite model.
I modified code I found in TensorFlow's documentation to do so, and ran it in Google Colab:
import tensorflow as tf
path = "/content/drive/MyDrive/real_frozen_inference_graph.pb"
input = ["image_tensor"]
output = ["detection_boxes", "detection_scores", "detection_multiclass_scores", "detection_classes", "num_detections", "raw_detection_boxes", "raw_detection_scores"]
# Convert the model
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(path, input_arrays=input, output_arrays=output)
print("starting conversion")
tflite_model = converter.convert()
print("done converting")
# Save the model.
with open('/content/drive/MyDrive/model.tflite', 'wb') as f:
    f.write(tflite_model)
The runtime disconnects almost immediately after I run the cell, which is especially odd since I pay for Colab Pro...
I changed the code above to run in my local dev environment, but still no luck. I get a long stream of raw binary and weird-looking output like this printed to stdout, but no file has been written by the time the script terminates:
%220 = "tfl.add"(%219, %cst_12) {fused_activation_function = "NONE"} : (tensor<?x100xf32>, tensor<f32>) -> tensor<?x100xf32>
%221 = "tf.TensorArraySizeV3"(%handle_170, %210#5) {_class = ["loc:@Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_12"], device = ""} : (tensor<2x!tf.resource<tensor<*xf32>>>, tensor<f32>) -> tensor<i32>
%222 = "tfl.range"(%cst_10, %221, %cst_13) : (tensor<i32>, tensor<i32>, tensor<i32>) -> tensor<?xi32>
%223 = "tf.TensorArrayGatherV3"(%handle_170, %222, %210#5) {_class = ["loc:@Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_12"], device = "", element_shape = #tf.shape<100x2>} : (tensor<2x!tf.resource<tensor<*xf32>>>, tensor<?xi32>, tensor<f32>) -> tensor<?x100x2xf32>
%224 = "tf.TensorArraySizeV3"(%handle_172, %210#6) {_class = ["loc:@Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_13"], device = ""} : (tensor<2x!tf.resource<tensor<*xi32>>>, tensor<f32>) -> tensor<i32>
%225 = "tfl.range"(%cst_10, %224, %cst_13) : (tensor<i32>, tensor<i32>, tensor<i32>) -> tensor<?xi32>
%226 = "tf.TensorArrayGatherV3"(%handle_172, %225, %210#6) {_class = ["loc:@Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_13"], device = "", element_shape = #tf.shape<>} : (tensor<2x!tf.resource<tensor<*xi32>>>, tensor<?xi32>, tensor<f32>) -> tensor<?xi32>
%227 = "tfl.cast"(%226) : (tensor<?xi32>) -> tensor<?xf32>
"std.return"(%213, %216, %223, %220, %227, %181, %186) : (tensor<?x100x4xf32>, tensor<?x100xf32>, tensor<?x100x2xf32>, tensor<?x100xf32>, tensor<?xf32>, tensor<?x?x4xf32>, tensor<?x?x2xf32>) -> ()
}) {sym_name = "main", tf.entry_function = {control_outputs = "", inputs = "image_tensor", outputs = "detection_boxes,detection_scores,detection_multiclass_scores,detection_classes,num_detections,raw_detection_boxes,raw_detection_scores"}, type = (tensor<?x?x?x3x!tf.quint8>) -> (tensor<?x100x4xf32>, tensor<?x100xf32>, tensor<?x100x2xf32>, tensor<?x100xf32>, tensor<?xf32>, tensor<?x?x4xf32>, tensor<?x?x2xf32>)} : () -> ()
Does anyone have any ideas on what's going wrong here? Is it printing the TFLite file to my stdout? Maybe there's something obvious I'm overlooking? I'm new to TensorFlow so any help is appreciated.
Upvotes: 0
Views: 2394
Reputation: 861
This is the converter's output, and it indicates where the conversion failed. The converter is a native application; the Python wrapper just relays its output. The failure can have various causes: unimplemented ops, unimplemented inputs/outputs of ops, etc. Here is how I would start:
Pick the latest TF version you can (start with the latest stable release and try all the steps below; if that doesn't help, try the nightly build).
Enable Select TF ops - this expands the set of ops you can use. Apply it and try converting again:
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
]
Allow custom ops and then inspect the TFLite model for them: converter.allow_custom_ops = True. You still won't be able to run inference with such a model, but it will show you which ops need to be replaced (see the sketch below).
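For reference, here is a minimal sketch of how both settings could be plugged into the converter from the question. The path and tensor names are copied from the question; the final Analyzer call is an assumption - tf.lite.experimental.Analyzer only exists in newer TF releases, so drop that line if your version doesn't have it:
import tensorflow as tf

# Same frozen graph and tensor names as in the question.
path = "/content/drive/MyDrive/real_frozen_inference_graph.pb"
input_arrays = ["image_tensor"]
output_arrays = [
    "detection_boxes", "detection_scores", "detection_multiclass_scores",
    "detection_classes", "num_detections", "raw_detection_boxes",
    "raw_detection_scores",
]

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    path, input_arrays=input_arrays, output_arrays=output_arrays)

# Step 2: allow Select TF ops in addition to the TFLite builtins.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]

# Step 3: let unsupported ops pass through as custom ops so conversion can finish.
converter.allow_custom_ops = True

tflite_model = converter.convert()
with open("/content/drive/MyDrive/model.tflite", "wb") as f:
    f.write(tflite_model)

# Optional (assumption: only available in newer TF releases): print the ops
# in the converted model so the custom ones are easy to spot.
tf.lite.experimental.Analyzer.analyze(model_content=tflite_model)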
Upvotes: 1