david.ju321

Reputation: 65

Difference in output between TensorFlow and TF-Lite

I have a TensorFlow model that I converted to TensorFlow Lite, but there is a deviation in inference accuracy. Is this normal behaviour?

I found that the inference outputs of the two models differ after the fourth decimal place.

Upvotes: 2

Views: 1175

Answers (2)

Chan Kha Vu

Reputation: 10400

On float32 precision

float32 is the default value type used in TensorFlow. Let's talk a bit about the float32 type and the importance of the order of operations. There is a neat table from this post that shows how the precision of a float degrades as its magnitude increases:

Float Value     Float Precision 
1               1.19E-07        
10              9.54E-07        
100             7.63E-06        
1,000           6.10E-05        
10,000          0.000977        
100,000         0.00781         
1,000,000       0.0625          
10,000,000      1               
100,000,000     8               
1,000,000,000   64              

What does this tell us? In float32, you cannot expect to have exact values; you only get discretization points that are hopefully close to the real value. The larger the value, the further away the nearest representable point can be.
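You can see this spacing directly with NumPy: `np.spacing` returns the gap between a float and the next representable one, and the values line up with the table above.

```python
import numpy as np

# The gap (ULP) between a float32 and the next representable float32
# grows with magnitude, matching the precision table above.
for value in [1.0, 1_000.0, 10_000_000.0]:
    gap = np.spacing(np.float32(value))
    print(f"{value:>12,.0f} -> precision {gap:.3g}")
```

At a magnitude of ten million, neighbouring float32 values are a whole unit apart, so anything smaller than that is simply unrepresentable there.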

You can learn more about the IEEE 754 single precision format here, here, and here, and you can even google more about it.

Now back to TF-Lite

What does the conversion from TensorFlow to TF-Lite have to do with this property of float32? Consider the following situation:

sum_1 = a_1 + a_2 + a_3 + a_4 + ... + a_n
sum_2 = a_2 + a_1 + a_4 + a_3 + ... + a_n

i.e. sum_1 and sum_2 differ only in the order of summation. Will they be equal? Maybe, or maybe not! The same goes for other accumulative operations, e.g. multiplications, convolutions, etc. That's the key: in float32 calculations, order matters! (This is similar to the issue where calculations of the same model on CPU and GPU differ slightly.) I've stumbled upon this problem countless times when porting models between frameworks (Caffe, TensorFlow, Torch, etc.).
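Here is a tiny deterministic demonstration that float32 addition is not associative, so reordering a sum really can change the result:

```python
import numpy as np

a = np.float32(1e8)
b = np.float32(-1e8)
c = np.float32(1.0)

# At magnitude 1e8, float32 values are 8 apart, so 1e8 + 1 rounds back to 1e8.
print((a + b) + c)  # 1.0 -- cancellation happens first, then 1.0 survives
print((a + c) + b)  # 0.0 -- the 1.0 is absorbed by 1e8 and lost
```

Same three numbers, different grouping, different answer. A layer that accumulates in a different order will show exactly this kind of drift.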

So, even if the implementation of any layer in TF-Lite differs only slightly from TensorFlow's, you will end up with an error on the order of 1e-5, at most 1e-4. That is acceptable for single-precision floats, so don't be bothered by it.
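For that reason, when validating a converted model you should compare outputs with a tolerance rather than exact equality. A minimal sketch, with hypothetical output values standing in for the real model outputs:

```python
import numpy as np

# Hypothetical outputs from the original model and the converted model,
# differing after the fourth decimal place as described in the question.
tf_output = np.array([0.123456, 0.654321], dtype=np.float32)
tflite_output = np.array([0.123459, 0.654318], dtype=np.float32)

# Exact comparison fails, but a float32-appropriate tolerance passes.
print(np.array_equal(tf_output, tflite_output))          # False
print(np.allclose(tf_output, tflite_output, atol=1e-4))  # True
```

If `np.allclose` with `atol=1e-4` passes, the conversion is behaving as expected for single precision.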

Upvotes: 2

user9477964

Reputation:

While training in TensorFlow,

  • All the variables and constants may be of dtype=float64. These numbers carry more decimal places.

  • Since they are training variables, their values are not constant.

After converting to TensorFlow lite,

  • The training variables are converted to constant operations; their values are fixed.

  • When we run the lite model on Android or iOS, these values are converted to float32.

Hence, precision is lost in TensorFlow Lite.
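You can see the kind of precision loss this answer describes by casting a double to single precision. The weight value below is hypothetical, just to illustrate the cast:

```python
import numpy as np

w64 = np.float64(0.123456789012345)  # hypothetical trained weight in float64
w32 = np.float32(w64)                # what a float32 runtime would store

# float32 keeps only ~7 significant digits, so the tail is dropped.
print(float(w64) - float(w32))
```

The difference is tiny but non-zero, which is consistent with outputs diverging after the fourth decimal place once such errors accumulate through many layers.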

Upvotes: 2
