Jeff

Reputation: 5966

How do I use HAL version 1.2 with TensorFlow Lite on Android?

I have a quantized TensorFlow Lite model that I'm loading onto a Pixel 3 running Android 11. I built the model using TensorFlow Lite 2.5, and I'm using the nightly builds of TensorFlow for Android.

I'm initializing the TFLite Interpreter using the default provided NNAPI delegate.
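The initialization looks roughly like this (a minimal sketch of the standard `NnApiDelegate` setup; `modelBuffer` stands in for however the model file is mapped from assets):

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.nnapi.NnApiDelegate;
import java.nio.MappedByteBuffer;

// modelBuffer is assumed to be a MappedByteBuffer loaded from the app's assets.
MappedByteBuffer modelBuffer = loadModelFile();

// Create the NNAPI delegate with default options and attach it to the interpreter.
NnApiDelegate nnApiDelegate = new NnApiDelegate();
Interpreter.Options options = new Interpreter.Options();
options.addDelegate(nnApiDelegate);

Interpreter interpreter = new Interpreter(modelBuffer, options);

// ... run inference ...

// Both the interpreter and the delegate should be closed when done.
interpreter.close();
nnApiDelegate.close();
```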

However, when I load the model, I'm getting the following error from NNAPI:

E/OperationsUtils(16219): NN_RET_CHECK failed (frameworks/ml/nn/common/OperationsUtils.cpp:111): Operation QUANTIZE with inputs {TENSOR_FLOAT32} and outputs {TENSOR_QUANT8_ASYMM} is only supported since HAL version 1.2 (validating using HAL version 1.0)
E/Utils   (16219): Validation failed for operation QUANTIZE
E/OperationsUtils(16219): NN_RET_CHECK failed (frameworks/ml/nn/common/OperationsUtils.cpp:111): Operation QUANTIZE with inputs {TENSOR_FLOAT32} and outputs {TENSOR_QUANT8_ASYMM} is only supported since HAL version 1.2 (validating using HAL version 1.0)

Android 11 should support NNAPI 1.2. Is there some parameter I'm missing in TensorFlow or Android to enable support for higher versions of NNAPI?

For reference, here are my dependencies from my gradle file:

dependencies {
    // snip
    implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly-SNAPSHOT'
    implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly-SNAPSHOT'
}

Upvotes: 0

Views: 228

Answers (1)

Jeff

Reputation: 5966

It turns out these errors are really warnings coming from NNAPI. TensorFlow Lite creates the model for all available devices, and NNAPI picks the best one based on the operations. Adding verbose logging shows that the eventual result of all of this is that NNAPI decides the only device capable of processing the model is the qti-default device. The errors come from the paintbox and nnapi-reference devices, which are then not used in the execution of the model.

I assumed these messages were the cause of a failure to execute the model on NNAPI, but there is something else wrong.

So the answer to this question is that TensorFlow Lite and NNAPI select the best-supported device where possible, despite the scary error messages.
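If you want to skip the validation noise from the other devices entirely, the delegate can be pointed at a specific accelerator. A rough sketch (the name "qti-default" here is taken from the verbose logs on this particular Pixel 3; accelerator names vary by device, so this is an illustration rather than something to hard-code):

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.nnapi.NnApiDelegate;

// Pin the delegate to one accelerator so NNAPI does not try to
// validate the model against paintbox or nnapi-reference.
NnApiDelegate.Options nnApiOptions = new NnApiDelegate.Options();
nnApiOptions.setAcceleratorName("qti-default"); // device name from verbose NNAPI logs

NnApiDelegate nnApiDelegate = new NnApiDelegate(nnApiOptions);
Interpreter.Options options = new Interpreter.Options();
options.addDelegate(nnApiDelegate);
```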

Upvotes: 0
