Michael

Reputation: 141

"Model not quantized" even after post-training quantization

I downloaded a TensorFlow model from Custom Vision and want to run it on a Coral TPU. I therefore converted it to TensorFlow Lite and applied hybrid post-training quantization (as far as I know, that's the only option, because I do not have access to the training data). You can see the code here: https://colab.research.google.com/drive/1uc2-Yb9Ths6lEPw6ngRpfdLAgBHMxICk When I then try to compile it for the Edge TPU, I get the following:

    Edge TPU Compiler version 2.0.258810407
    INFO: Initialized TensorFlow Lite runtime.
    Invalid model: model.tflite
    Model not quantized
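
The conversion step described above can be sketched roughly as follows. This is a minimal example using the TF 2.x Keras API; the tiny model here is only a stand-in for the Custom Vision export (which would be loaded with `from_saved_model` or `from_frozen_graph` instead), and `OPTIMIZE_FOR_SIZE` is the hybrid (dynamic-range) quantization flag:

```python
import tensorflow as tf

# Stand-in model: a tiny Keras classifier. In practice this would be
# the model exported from Custom Vision, loaded from disk.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Hybrid (dynamic-range) post-training quantization: weights are stored
# as int8, but activations remain float32 at inference time.
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```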

Any idea what my problem might be?

Upvotes: 2

Views: 3039

Answers (2)

nibeh

Reputation: 153

tflite models are not fully quantized by `converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]` alone. Have a look at post-training full integer quantization using a representative dataset: https://www.tensorflow.org/lite/performance/post_training_quantization#full_integer_quantization_of_weights_and_activations Simply adapt your generator function to yield representative samples (e.g. images similar to those your image classification network should predict). Very few images are enough for the converter to identify min and max values and fully quantize your model. However, accuracy is typically lower than with quantization-aware training.
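
A minimal sketch of the full-integer approach described above, again with a tiny stand-in Keras model (in practice you would load the Custom Vision export) and random placeholder data in the generator where real sample images belong:

```python
import numpy as np
import tensorflow as tf

# Stand-in model; replace with the actual Custom Vision export.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_dataset():
    # Yield a handful of samples shaped like the real inputs. Random
    # data is only a placeholder -- use real images so the converter
    # sees realistic activation ranges.
    for _ in range(10):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Require full int8 quantization: conversion fails if any op cannot be
# quantized, instead of silently falling back to float.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting fully-quantized model is what the Edge TPU compiler expects.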

Upvotes: 3

Silfverstrom

Reputation: 29348

I can't find the source, but I believe the Edge TPU currently only supports 8-bit quantized models, and no hybrid operators.

EDIT: On Coral's FAQ they mention that the model needs to be fully quantized:

You need to convert your model to TensorFlow Lite and it must be quantized using either quantization-aware training (recommended) or full integer post-training quantization.

Upvotes: 1

Related Questions