Mr. Ace

Reputation: 375

Tensorflow: TFLiteConverter (Saved Model -> TFLite) requires all operands and results to have compatible element types

I have been stuck on this issue for a few days now: when I try to convert my saved_model.pb file to a .tflite model using the code below, it fails with the error shown in the stack trace.


Conversion Code:

converter = tf.lite.TFLiteConverter.from_saved_model(
    "/tmp/test_saved_model2")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()
open("converted_model.tflite", "wb").write(quantized_model)

Stacktrace:

Traceback (most recent call last):
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\convert.py", line 196, in toco_convert_protos
    model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\wrap_toco.py", line 32, in wrapped_toco_convert
    return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc("Func/StatefulPartitionedCall/input/_0"): requires all operands and results to have compatible element types
<unknown>:0: note: loc("Func/StatefulPartitionedCall/input/_0"): see current operation: %1 = "tf.Identity"(%arg0) {device = ""} : (tensor<1x?x?x3x!tf.quint8>) -> tensor<1x?x?x3xui8>


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:/Data/TFOD/tflite_converter.py", line 27, in <module>
    quantized_model = converter.convert()
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\lite.py", line 1076, in convert
    return super(TFLiteConverterV2, self).convert()
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\lite.py", line 899, in convert
    return super(TFLiteFrozenGraphConverterV2,
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\lite.py", line 629, in convert
    result = _toco_convert_impl(
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\convert.py", line 569, in toco_convert_impl
    data = toco_convert_protos(
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\convert.py", line 202, in toco_convert_protos
    raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc("Func/StatefulPartitionedCall/input/_0"): requires all operands and results to have compatible element types
<unknown>:0: note: loc("Func/StatefulPartitionedCall/input/_0"): see current operation: %1 = "tf.Identity"(%arg0) {device = ""} : (tensor<1x?x?x3x!tf.quint8>) -> tensor<1x?x?x3xui8>

I have tried using tf-nightly, and although the conversion runs there, it doesn't produce the FlatBuffer model I need to run it on an Android phone. How can I solve this problem?

Upvotes: 1

Views: 2469

Answers (2)

Alex K.

Reputation: 861

There are two points you should pay attention to:

  1. (tensor<1x?x?x3x!tf.quint8>) -> tensor<1x?x?x3xui8>

It looks like your model has a dynamic input shape, and TFLite does not handle dynamic shapes well. First, convert your model from saved_model to tflite with a fixed input shape, e.g.:

# in --input_shapes, set any arbitrary valid fixed shape for your model
tflite_convert \
  --saved_model_dir="/tmp/test_saved_model2" \
  --output_file='model.tflite' \
  --input_shapes=1,256,256,3 \
  --input_arrays='input' \
  --output_arrays='Softmax'

Another way is to build your saved_model with fixed input and output shapes, so you do not need to specify them during the saved_model -> tflite conversion. This is the only option for TF2.
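A minimal sketch of that approach, using a toy tf.Module as a stand-in for the real detection model (the paths and the 1x256x256x3 shape here are arbitrary assumptions, not taken from the question): pinning the input shape via `input_signature` at export time means the converter never sees a dynamic `(1, ?, ?, 3)` tensor.

```python
import tensorflow as tf

class TinyModel(tf.Module):
    def __init__(self):
        super().__init__()
        # A single 3x3 conv kernel as a stand-in for a real model.
        self.kernel = tf.Variable(tf.random.normal([3, 3, 3, 8]))

    # The input_signature pins a fixed (1, 256, 256, 3) shape into
    # the exported graph, so no dynamic dimensions reach the converter.
    @tf.function(input_signature=[tf.TensorSpec([1, 256, 256, 3], tf.float32)])
    def serve(self, x):
        return tf.nn.conv2d(x, self.kernel, strides=1, padding="SAME")

module = TinyModel()
tf.saved_model.save(module, "/tmp/fixed_shape_model",
                    signatures=module.serve)

converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/fixed_shape_model")
tflite_model = converter.convert()

# A valid TFLite FlatBuffer carries the "TFL3" file identifier.
print(tflite_model[4:8])  # b'TFL3'
```

With a real model you would export its serving function the same way, with a `TensorSpec` matching one concrete input shape.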

  2. converter.optimizations = [tf.lite.Optimize.DEFAULT]

While debugging, avoid any kind of optimization, so that you have fewer places to search for your bug. That is the general idea.

Upvotes: 2

I faced the same issue before, and now I can produce a tflite model with the following 3 steps:

  1. train the model:

    !python /content/models/research/object_detection/model_main_tf2.py \
        --pipeline_config_path={pipeline_config_path} \
        --model_dir={model_dir} \
        --alsologtostderr \
        --num_train_steps={num_steps} \
        --sample_1_of_n_eval_examples=1 \
        --num_eval_steps={num_eval_steps}

  2. export the TFLite-compatible graph (TF2):

    !python models/research/object_detection/export_tflite_graph_tf2.py \
        --pipeline_config_path={pipeline_config_path} \
        --trained_checkpoint_dir={model_dir} \
        --output_directory=tflite_exported

  3. convert the exported saved_model to .tflite:

    !tflite_convert --output_file 'model.tflite' --saved_model_dir 'tflite_exported/saved_model'

Upvotes: 0
