wuhy08

Reputation: 361

tflite quantized inference very slow

I am trying to convert a trained model from a checkpoint file to tflite using tf.lite.TFLiteConverter. The float conversion went fine, with reasonable inference speed. But the inference speed of the INT8 conversion is very slow. I tried to debug by feeding in a very small network, and found that the INT8 model is generally slower than the float model at inference.
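
For reference, this is roughly the kind of conversion being attempted (a minimal post-training full-integer quantization sketch; the SavedModel path, input shape, and representative dataset below are placeholders, and the actual converter settings may differ):

    import numpy as np
    import tensorflow as tf

    # Placeholder: directory of a SavedModel exported from the checkpoint.
    saved_model_dir = "export/saved_model"

    def representative_dataset():
        # Placeholder calibration data; the shape must match the model input.
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)

    # Float conversion -- reasonable inference speed.
    float_model = converter.convert()

    # Full-integer (INT8) conversion -- very slow inference in my case.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    int8_model = converter.convert()

    with open("model_float.tflite", "wb") as f:
        f.write(float_model)
    with open("model_int8.tflite", "wb") as f:
        f.write(int8_model)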

In the INT8 tflite file, I found some tensors called ReadVariableOp, which don't exist in TensorFlow's official MobileNet tflite model.

I wonder what causes the slowness of INT8 inference.

Upvotes: 12

Views: 3931

Answers (2)

Mike B

Reputation: 3416

There can be many reasons for this; some of the most common are:

  1. Lack of an INT8 instruction set architecture (ISA)

    You won't, for example, see an INT8 speedup over Float32 on Intel CPUs older than 10th gen. This is because Intel CPUs before 10th gen don't have Intel DL Boost, an ISA extension designed to improve the performance of INT8 deep learning models. This ISA is present in Intel chips from 10th gen onwards. Without a specific INT8 ISA, the operations most likely get upcast to Float32.

  2. Float32 operations that translate into verbose INT8 operations

    Some Float32 operations are not INT8-friendly, which leads to a very verbose INT8 counterpart that can be much slower than the original operation (a quick way to compare the two models on your machine is sketched below).
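
To see how the two models actually compare on a given machine, you can time both with the TFLite interpreter. A minimal benchmark sketch (the model filenames are placeholders for your converted files):

    import time
    import numpy as np
    import tensorflow as tf

    def benchmark(model_path, runs=100):
        # Load the model and allocate tensors.
        interpreter = tf.lite.Interpreter(model_path=model_path)
        interpreter.allocate_tensors()
        inp = interpreter.get_input_details()[0]

        # Random input matching the model's expected shape and dtype.
        data = np.random.rand(*inp["shape"]).astype(inp["dtype"])
        interpreter.set_tensor(inp["index"], data)
        interpreter.invoke()  # warm-up run

        start = time.perf_counter()
        for _ in range(runs):
            interpreter.invoke()
        return (time.perf_counter() - start) / runs

    # Placeholder filenames for the two converted models.
    print("float32:", benchmark("model_float.tflite"), "s / inference")
    print("int8   :", benchmark("model_int8.tflite"), "s / inference")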

Upvotes: 0

Charlie Qiu

Reputation: 41

You possibly ran the benchmark on an x86 CPU instead of one with ARM instructions; the quantized TFLite kernels are mainly optimized for ARM (NEON). See https://github.com/tensorflow/tensorflow/issues/21698#issuecomment-414764709 for details.
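
A quick sanity check is to confirm which architecture the benchmark is actually running on (a trivial sketch; INT8 timings taken on an x86 desktop can be misleading for an ARM deployment target):

    import platform

    # 'x86_64' suggests the quantized kernels may fall back to slower,
    # less-optimized code paths; 'aarch64' or 'armv7l' indicates an ARM
    # target where the NEON-optimized INT8 kernels apply.
    print(platform.machine())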

Upvotes: 3
