Reputation: 3
I am trying to convert this model to a TFLite model, but since the model uses float64 operations I am running into issues. I hope someone has some insight into whether this conversion is possible at all and, if so, how; I'll detail my steps so far below. Thank you.
A basic TFLite conversion:
import tensorflow as tf
saved_model_dir = "model"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
# Save the converted model
with open('converted_model.tflite', 'wb') as f:
    f.write(tflite_model)
Throws the following error:
W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:3825] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s):
Flex ops: FlexAddV2, FlexAll, FlexAny, FlexAssert, FlexAvgPool, FlexBatchMatMulV2, FlexBiasAdd, FlexCast, FlexComplexAbs, FlexConcatV2, FlexConv2D, FlexDepthwiseConv2dNative, FlexExpandDims, FlexFloorDiv, FlexFloorMod, FlexFusedBatchNormV3, FlexGatherV2, FlexGreater, FlexGreaterEqual, FlexIdentity, FlexIdentityN, FlexIsNan, FlexLog, FlexMatMul, FlexMax, FlexMaximum, FlexMean, FlexMinimum, FlexMul, FlexNeg, FlexPack, FlexPad, FlexPadV2, FlexRFFT, FlexRange, FlexRelu, FlexReshape, FlexRound, FlexSelectV2, FlexShape, FlexSigmoid, FlexSplitV, FlexSqueeze, FlexStopGradient, FlexStridedSlice, FlexSub, FlexTile, FlexTopKV2, FlexTranspose
I then attempted to explicitly allow the TF ops with:
import tensorflow as tf
saved_model_dir = "model"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # the default TFLite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS     # fall back to TF ops via the Flex delegate
]
tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)
which suppresses most of the error messages and produces a .tflite model, but some remain:
W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:3825] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s):
Flex ops: FlexMul, FlexRound, FlexSub
Details:
tf.Mul(tensor<f64>, tensor<f64>) -> (tensor<f64>) : {device = ""}
tf.Round(tensor<f64>) -> (tensor<f64>) : {device = ""}
tf.Sub(tensor<f64>, tensor<f64>) -> (tensor<f64>) : {device = ""}
I've tried to build the Flex delegate in a clone of the tensorflow repository checked out at the version in my environment (2.18.0), running bazelisk (cleaning each time) with various adaptations such as:
bazelisk build -c opt --config=macos --define=tflite_with_flex=true //tensorflow/lite/delegates/flex:tensorflowlite_flex
bazelisk build -c opt --config=macos //tensorflow/lite/delegates/flex:tensorflowlite_flex
bazelisk build -c opt --config=monolithic //tensorflow/lite/delegates/flex:tensorflowlite_flex
Regardless of the build, I fail to load the TFLite model with these delegates:
import tensorflow as tf
flex_delegate_path = ".../libtensorflowlite_flex.dylib"
flex_delegate = tf.lite.experimental.load_delegate(flex_delegate_path)
interpreter = tf.lite.Interpreter(
    model_path="converted_model.tflite",
    experimental_delegates=[flex_delegate]
)
interpreter.allocate_tensors()
I get the following error:
2025-02-26 13:41:45.942087: W tensorflow/core/common_runtime/input_colocation_exemption_registry.cc:33] Input colocation exemption for op: IdentityN already registered
2025-02-26 13:41:45.947004: F tensorflow/core/framework/variant_op_registry.cc:76] Check failed: existing == nullptr (0x12b107918 vs. nullptr)Unary VariantDecodeFn for type_name: CompositeTensorVariant already registered
And the built delegate seems to be missing the ops I need:
nm .../tensorflow/bazel-bin/tensorflow/lite/delegates/flex/libtensorflowlite_flex.dylib | grep FlexMul
grep -r "FlexMul" tensorflow/lite/kernels/
grep -r "FlexRound" tensorflow/lite/kernels/
grep -r "FlexSub" tensorflow/lite/kernels/
All return nothing.
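Possibly relevant: as far as I can tell, the full tensorflow pip package already bundles Select TF (Flex) op support, which would also explain the duplicate-registration crash above (two copies of the runtime registering the same kernels in one process). If that is right, the model should load from Python without the external dylib at all:
import tensorflow as tf

# The full tensorflow pip package should already include the Flex ops,
# so no external delegate is loaded here
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()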
If anyone has any idea or suggestions, please let me know - thanks!
UPDATE:
This model does not actually contain any float64 operations in its architecture:
_SignatureMap({
  'score': <ConcreteFunction (*, context_step_samples: TensorSpec(shape=(), dtype=tf.int64, name='context_step_samples'), waveform: TensorSpec(shape=(None, None, 1), dtype=tf.float32, name='waveform')) -> Dict[['score', TensorSpec(shape=(None, None, 12), dtype=tf.float32, name='score')]] at 0x16F037790>,
  'metadata': <ConcreteFunction () -> Dict[['input_sample_rate', TensorSpec(shape=(), dtype=tf.int64, name='input_sample_rate')], ['class_names', TensorSpec(shape=(12,), dtype=tf.string, name='class_names')], ['context_width_samples', TensorSpec(shape=(), dtype=tf.int64, name='context_width_samples')]] at 0x1665E1270>,
  'serving_default': <ConcreteFunction (*, context_step_samples: TensorSpec(shape=(), dtype=tf.int64, name='context_step_samples'), waveform: TensorSpec(shape=(None, None, 1), dtype=tf.float32, name='waveform')) -> Dict[['score', TensorSpec(shape=(None, None, 12), dtype=tf.float32, name='score')]] at 0x16F0377C0>,
  'front_end': <ConcreteFunction (*, waveform: TensorSpec(shape=(None, None, 1), dtype=tf.float32, name='waveform')) -> Dict[['output_0', TensorSpec(shape=(None, None, 128), dtype=tf.float32, name='output_0')]] at 0x1695CBBE0>,
  'features': <ConcreteFunction (*, spectrogram: TensorSpec(shape=(None, 128, 128), dtype=tf.float32, name='spectrogram')) -> Dict[['output_0', TensorSpec(shape=(None, 1280), dtype=tf.float32, name='output_0')]] at 0x16AF593C0>,
  'logits': <ConcreteFunction (*, spectrogram: TensorSpec(shape=(None, 128, 128), dtype=tf.float32, name='spectrogram')) -> Dict[['output_0', TensorSpec(shape=(None, 12), dtype=tf.float32, name='output_0')]] at 0x169CD6D10>})
Not sure why they appear during the TFLite conversion...
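One way to double-check where the float64 tensors come from is to walk the graph of one of the concrete functions and print every op that produces a float64 output (a rough sketch, using the 'score' signature from the map above):
import tensorflow as tf

loaded = tf.saved_model.load("model")
fn = loaded.signatures["score"]

# Print every op in the concrete function's graph that yields a float64 tensor
for op in fn.graph.get_operations():
    if any(out.dtype == tf.float64 for out in op.outputs):
        print(op.type, op.name)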
Upvotes: 0
Views: 26
Reputation: 3
To answer this in case anyone runs into a similar issue in the future: it turns out the model I was trying to convert did not include any float64 operations in its architecture; the problems were introduced by additional processing layers compiled into the model. Once I loaded the model and resaved it as is, these processing layers were stripped off automatically and I was able to convert to TFLite without issues.
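A minimal sketch of the load-and-resave step (directory names are just placeholders):
import tensorflow as tf

# Load the SavedModel and write it back out as is; in my case the
# extra processing layers were not re-exported
loaded = tf.saved_model.load("model")
tf.saved_model.save(loaded, "model_resaved", signatures=loaded.signatures)

# The resaved model then converts without the float64 Flex ops
converter = tf.lite.TFLiteConverter.from_saved_model("model_resaved")
tflite_model = converter.convert()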
Now I just need to figure out what processing was performed in those layers to obtain the same results.
Upvotes: 0
Reputation: 1
I would suggest converting your model's tensors from float64 to float32, since TFLite primarily supports float32.
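If the float64 tensors enter at the signature boundary, one way to do that is to wrap the serving signature in a tf.function that casts outputs down to float32 and re-export (a rough sketch based on the signatures shown in the question; note this does not remove float64 ops baked inside the graph):
import tensorflow as tf

loaded = tf.saved_model.load("model")
infer = loaded.signatures["serving_default"]

@tf.function(input_signature=[
    tf.TensorSpec(shape=(None, None, 1), dtype=tf.float32, name="waveform"),
    tf.TensorSpec(shape=(), dtype=tf.int64, name="context_step_samples"),
])
def serve_f32(waveform, context_step_samples):
    outputs = infer(waveform=waveform, context_step_samples=context_step_samples)
    # Cast any float64 outputs down to float32 so the converter only sees float32
    return {k: tf.cast(v, tf.float32) if v.dtype == tf.float64 else v
            for k, v in outputs.items()}

tf.saved_model.save(loaded, "model_f32",
                    signatures={"serving_default": serve_f32})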
Upvotes: 0