Reputation: 3
I am developing an application in Android Studio using a TensorFlow Lite model. When running the app, I encounter the following error:
java.lang.AssertionError: TensorFlow Lite does not support data type INT32
Below is the relevant part of my code:
import org.tensorflow.lite.DataType
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer

// Prepare input tensor
val inputFeature0 = TensorBuffer.createFixedSize(inputShape, DataType.FLOAT32)
inputFeature0.loadArray(flatArray)

// Run inference
val outputs = model?.process(inputFeature0)
val rawOutputBuffer = outputs?.outputFeature0AsTensorBuffer

// Extract raw data as IntArray or FloatArray based on the data type
val outputArray = when (rawOutputBuffer?.dataType) {
    DataType.INT32 -> rawOutputBuffer.intArray // Directly access INT32 data
    DataType.FLOAT32 -> rawOutputBuffer.floatArray.map { it.toInt() }.toIntArray() // Convert FloatArray to IntArray
    else -> throw IllegalArgumentException("Unsupported output tensor data type: ${rawOutputBuffer?.dataType}")
}
The input tensor is of type FLOAT32, and the input data is loaded correctly using TensorBuffer.createFixedSize() and loadArray(). When processing the model's output tensor (outputFeature0AsTensorBuffer), I added checks to handle both FLOAT32 and INT32 outputs. Despite this, the app crashes with the error indicating that TensorFlow Lite does not support INT32.
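To narrow this down, one thing I plan to try is asking the model directly which tensor types it declares, using the raw Interpreter API instead of the generated wrapper. A minimal sketch of what I have in mind; "model.tflite" and the asset-loading boilerplate are placeholders for my actual setup:

import android.content.Context
import android.util.Log
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.channels.FileChannel

// Log the data types and shapes the model actually declares.
// "model.tflite" is a placeholder asset name.
fun logTensorTypes(context: Context) {
    val fd = context.assets.openFd("model.tflite")
    val modelBuffer = FileInputStream(fd.fileDescriptor).channel
        .map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
    val interpreter = Interpreter(modelBuffer)
    val input = interpreter.getInputTensor(0)
    val output = interpreter.getOutputTensor(0)
    Log.d("TFLite", "input: ${input.dataType()} ${input.shape().contentToString()}")
    Log.d("TFLite", "output: ${output.dataType()} ${output.shape().contentToString()}")
    interpreter.close()
}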
What I have tried:
I added the type check shown above, expecting inference to run without issues since both the FLOAT32 and INT32 output cases are handled, but the app still crashes before my when block is ever reached.
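If the output tensor really is INT32, I suspect the support library's TensorBuffer is the limitation (as far as I can tell it only backs FLOAT32 and UINT8 data), so a fallback I am considering is running inference through the raw Interpreter and reading the output into an IntArray. A sketch under those assumptions; the [1, 10] output shape and the flat FloatArray input are placeholders for my actual data:

import org.tensorflow.lite.Interpreter

// Hypothetical fallback: read an INT32 output with the raw Interpreter.
// The [1, 10] output shape and the FloatArray input are placeholders.
fun runRawInference(interpreter: Interpreter, flatArray: FloatArray): IntArray {
    val input = arrayOf(flatArray)          // input shape [1, N] assumed
    val output = Array(1) { IntArray(10) }  // INT32 output, shape [1, 10] assumed
    interpreter.run(input, output)
    return output[0]
}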
Questions:
Why does this AssertionError occur even though I check the output type before reading it, and how can I correctly handle a model whose output tensor is INT32?
Upvotes: 0
Views: 32