Desperate VS

Reputation: 61

Tensorflow Lite: Cannot convert between a TensorFlowLite buffer and a ByteBuffer

I am trying to migrate a custom model to the Android platform. The TensorFlow version is 1.12. I used the recommended command line shown below:

tflite_convert \
  --output_file=test.tflite \
  --graph_def_file=./models/test_model.pb \
  --input_arrays=input_image \
  --output_arrays=generated_image

to convert the .pb file into the TFLite format.

I checked the input tensor shape of my .pb file in TensorBoard:

dtype
{"type":"DT_FLOAT"}
shape
{"shape":{"dim":[{"size":474},{"size":712},{"size":3}]}}

Then I deployed the .tflite file on Android and allocated the input ByteBuffer that I planned to feed to the model:

imgData = ByteBuffer.allocateDirect(
          4 * 1 * 712 * 474 * 3);

When I run the model on an Android device, the app crashes and logcat prints:

2019-03-04 10:31:46.822 17884-17884/android.example.com.tflitecamerademo E/AndroidRuntime: FATAL EXCEPTION: main
    Process: android.example.com.tflitecamerademo, PID: 17884
    java.lang.RuntimeException: Unable to start activity ComponentInfo{android.example.com.tflitecamerademo/com.example.android.tflitecamerademo.CameraActivity}: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 786432 bytes and a ByteBuffer with 4049856 bytes.

This is strange, since the allocated ByteBuffer is exactly the product 4 * 1 * 712 * 474 * 3 = 4049856 bytes, whereas the TensorFlow Lite buffer size is not a multiple of 474 or 712 at all. In fact 786432 = 4 * 3 * 256 * 256, as if the converted model expected a 256 x 256 input. I can't figure out why the TFLite model ended up with the wrong shape.

Thanks in advance if anyone can give a solution.

Upvotes: 2

Views: 5886

Answers (3)

Pankaj Kant Patel

Reputation: 2060

I ran into a similar problem yesterday. I'd like to share the solution that worked for me.

It seems that TFLite only supports exact square bitmap inputs. For example, a 256 x 256 input worked for detection, while a 256 x 255 input did not and threw an exception.

The maximum supported size is 257 x 257, so that should be the width and height of any input bitmap.

Here is sample code to crop and resize a bitmap.

private var MODEL_HEIGHT = 257
private var MODEL_WIDTH = 257

Crop bitmap

val croppedBitmap = cropBitmap(bitmap)

Create a scaled version of the bitmap for the model input

val scaledBitmap = Bitmap.createScaledBitmap(croppedBitmap, MODEL_WIDTH, MODEL_HEIGHT, true)

https://github.com/tensorflow/examples/blob/master/lite/examples/posenet/android/app/src/main/java/org/tensorflow/lite/examples/posenet/PosenetActivity.kt#L578

Crop Bitmap to maintain aspect ratio of model input.

import kotlin.math.abs

// Crops the bitmap so that its aspect ratio matches the model input's aspect ratio.
private fun cropBitmap(bitmap: Bitmap): Bitmap {
  val bitmapRatio = bitmap.height.toFloat() / bitmap.width
  val modelInputRatio = MODEL_HEIGHT.toFloat() / MODEL_WIDTH
  var croppedBitmap = bitmap

  // Acceptable difference between modelInputRatio and bitmapRatio to skip cropping.
  val maxDifference = 1e-5

  // Check whether the bitmap already has roughly the required aspect ratio.
  when {
    abs(modelInputRatio - bitmapRatio) < maxDifference -> return croppedBitmap
    modelInputRatio < bitmapRatio -> {
      // The bitmap is relatively taller than the model input: keep the width
      // and crop the height down to width * (modelHeight / modelWidth).
      val cropHeight = bitmap.height - (bitmap.width.toFloat() * modelInputRatio)
      croppedBitmap = Bitmap.createBitmap(
        bitmap,
        0,
        (cropHeight / 2).toInt(),
        bitmap.width,
        (bitmap.height - cropHeight).toInt()
      )
    }
    else -> {
      // The bitmap is relatively wider than the model input: keep the height
      // and crop the width down to height / (modelHeight / modelWidth).
      val cropWidth = bitmap.width - (bitmap.height.toFloat() / modelInputRatio)
      croppedBitmap = Bitmap.createBitmap(
        bitmap,
        (cropWidth / 2).toInt(),
        0,
        (bitmap.width - cropWidth).toInt(),
        bitmap.height
      )
    }
  }
  return croppedBitmap
}

https://github.com/tensorflow/examples/blob/master/lite/examples/posenet/android/app/src/main/java/org/tensorflow/lite/examples/posenet/PosenetActivity.kt#L451
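
Once the bitmap has been cropped and scaled to the model's input size, it still has to be packed into the ByteBuffer that is fed to the interpreter. Here is a minimal sketch of that step, assuming a float32 model with RGB input normalized to [0, 1] (the function name and the normalization are illustrative, not taken from the PoseNet sample):

import android.graphics.Bitmap
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Packs an ARGB bitmap into a direct float32 buffer in RGB channel order.
fun bitmapToByteBuffer(bitmap: Bitmap): ByteBuffer {
  // 4 bytes per float, 3 channels per pixel.
  val buffer = ByteBuffer.allocateDirect(4 * bitmap.width * bitmap.height * 3)
  buffer.order(ByteOrder.nativeOrder())

  val pixels = IntArray(bitmap.width * bitmap.height)
  bitmap.getPixels(pixels, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)

  for (pixel in pixels) {
    // Extract the R, G, B channels from the packed ARGB int and scale to [0, 1].
    buffer.putFloat((pixel shr 16 and 0xFF) / 255.0f)
    buffer.putFloat((pixel shr 8 and 0xFF) / 255.0f)
    buffer.putFloat((pixel and 0xFF) / 255.0f)
  }
  buffer.rewind()
  return buffer
}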

Upvotes: 1

Jonathan

Reputation: 686

Earlier in the model-creation process I had changed the image dimensions from the standard 224 to 299 for other reasons, so I searched my Android Studio project for 224, updated the two remaining references (the image size constants) in ImageClassifier.java to 299, and I was back in business.

Upvotes: 0

Sachin Joglekar

Reputation: 696

You could visualize the TFLite model to debug what buffer sizes are actually allocated to the input tensors.

TensorFlow Lite models can be visualized using the visualize.py script.
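
In a TensorFlow source checkout, the invocation looks roughly like this (the script's location depends on the release; in older 1.x versions it lived under tensorflow/contrib/lite/tools/ before moving to tensorflow/lite/tools/):

python tensorflow/lite/tools/visualize.py test.tflite visualized_model.html

The generated HTML page lists every tensor with its shape and type, so you can see which input shape the converter actually recorded.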

If the input tensor's buffer size isn't what you expect it to be, then there might be a bug in the conversion (or in the arguments provided to tflite_convert; for instance, passing an explicit --input_shapes that matches your graph's input may help).
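
You can also query the input tensor directly at runtime. A minimal sketch, assuming a recent version of the org.tensorflow.lite Java API (getInputTensor is not available in very old releases):

import org.tensorflow.lite.Interpreter
import java.io.File

fun logInputTensorInfo(modelFile: File) {
  val interpreter = Interpreter(modelFile)
  val input = interpreter.getInputTensor(0)
  // e.g. a FLOAT32 tensor of shape [1, 256, 256, 3] reports 786432 bytes.
  println("shape=${input.shape().contentToString()}, type=${input.dataType()}, bytes=${input.numBytes()}")
  interpreter.close()
}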

Upvotes: 3
