Kenneth Kwek

Reputation: 11

Get ByteBuffer from Image for TensorFlow Lite Model

I am creating an Android app for the Google Glass Enterprise Edition 2 that does real-time face recognition. I am using CameraX as my camera API and a TensorFlow Lite (TFLite) model for classification. However, the TFLite model expects its input as a ByteBuffer, and I am unable to produce one from the image retrieved from CameraX.

How do I convert the Image from CameraX into a ByteBuffer for my TFLite model?

CameraX Image Analysis: Reference

    val imageAnalysis = ImageAnalysis.Builder()
            .setTargetResolution(Size(640, 360))
            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
            .build()

    imageAnalysis.setAnalyzer(AsyncTask.THREAD_POOL_EXECUTOR, ImageAnalysis.Analyzer { imageProxy ->
        val rotationDegrees = imageProxy.imageInfo.rotationDegrees
        val mediaImage = imageProxy.image

        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, rotationDegrees)

            /* Classify the Image using TensorFlow Lite Model */

        }

        // Close the ImageProxy so CameraX can deliver the next frame.
        imageProxy.close()
    })

TensorFlow Model Sample Code

val model = FaceRecognitionModel.newInstance(context)

// Creates inputs for reference.
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
inputFeature0.loadBuffer(byteBuffer)

// Runs model inference and gets result.
val outputs = model.process(inputFeature0)
val outputFeature0 = outputs.outputFeature0AsTensorBuffer

// Releases model resources if no longer used.
model.close()

Upvotes: 1

Views: 2739

Answers (2)

Kenneth Kwek

Reputation: 11

I did some digging and applied the following findings to solve my problem.

  1. The image I get from CameraX is in YUV format, while my model was trained on 224x224 RGB images. To match the model input, I first convert the image to an RGB Bitmap, crop it to 224x224, and then convert the Bitmap to a ByteBuffer.

  2. As for my TFLite model, it accepted the converted RGB ByteBuffer, processed it, and returned a TensorBuffer with the result.
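The Bitmap-to-ByteBuffer step in point 1 can be sketched as follows. This is a minimal version assuming the model expects a [1, 224, 224, 3] FLOAT32 input normalized to [0, 1]; `pixelsToByteBuffer` is a hypothetical helper that works on the packed ARGB ints returned by `Bitmap.getPixels`, so the conversion itself has no Android dependency:

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Converts packed ARGB pixels (as returned by Bitmap.getPixels) into a
// FLOAT32 ByteBuffer laid out as [1, height, width, 3], normalized to [0, 1].
fun pixelsToByteBuffer(pixels: IntArray, width: Int, height: Int): ByteBuffer {
    val buffer = ByteBuffer.allocateDirect(4 * width * height * 3)
        .order(ByteOrder.nativeOrder())
    for (pixel in pixels) {
        buffer.putFloat(((pixel shr 16) and 0xFF) / 255f) // R
        buffer.putFloat(((pixel shr 8) and 0xFF) / 255f)  // G
        buffer.putFloat((pixel and 0xFF) / 255f)          // B
    }
    buffer.rewind()
    return buffer
}
```

On Android, after cropping/scaling the Bitmap to 224x224, you would fetch the pixels with `bitmap.getPixels(pixels, 0, 224, 0, 0, 224, 224)` and pass the resulting buffer straight to `inputFeature0.loadBuffer(...)`.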

Upvotes: 0

yyoon

Reputation: 3855

Try using the TensorImage class from the TensorFlow Lite Support Library.

Roughly, you can follow these steps.

  1. Convert the Image object into a Bitmap. There are other Stack Overflow questions on how to do this (e.g., this answer).
  2. Create a TensorImage object from the Bitmap object using the TensorImage.fromBitmap() factory.
  3. Call the getBuffer() method on the TensorImage object to get the underlying ByteBuffer.

You might also want to do some image pre-processing, in case the image from CameraX doesn't exactly match the format expected by the model. For this, you can explore the ImageProcessor utility.
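A rough sketch of those steps in Kotlin (assuming the TFLite Support Library is on the classpath; the 224x224 size and the 0..255 normalization are assumptions that must match how your model was trained):

```kotlin
import android.graphics.Bitmap
import org.tensorflow.lite.DataType
import org.tensorflow.lite.support.common.ops.NormalizeOp
import org.tensorflow.lite.support.image.ImageProcessor
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.support.image.ops.ResizeOp
import java.nio.ByteBuffer

// Pre-processing pipeline: resize to the model's 224x224 input,
// then scale pixel values from 0..255 down to 0..1.
val imageProcessor = ImageProcessor.Builder()
    .add(ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
    .add(NormalizeOp(0f, 255f))
    .build()

fun bitmapToModelInput(bitmap: Bitmap): ByteBuffer {
    // Load the Bitmap into a float TensorImage, run the pipeline,
    // and expose the underlying ByteBuffer for the interpreter.
    var tensorImage = TensorImage(DataType.FLOAT32)
    tensorImage.load(bitmap)
    tensorImage = imageProcessor.process(tensorImage)
    return tensorImage.buffer
}
```

Because this depends on the Android Bitmap class and the TFLite Support Library, it only runs on-device; the returned buffer can then be fed to `inputFeature0.loadBuffer(...)` as in the question's sample code.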

Upvotes: 2
