firestreak

Reputation: 447

Flutter - Trying to Use Tensorflowlite - FloatEfficientNet

I am attempting to use a model that successfully runs inference in both native Swift and Android/Java, and to do the same in Flutter, specifically on the Android side.

In Flutter, however, the values I receive are way off.

What I have done so far:

  1. I took the TensorFlow Lite Android example repo: https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android, and found that the FloatEfficientNet option gave accurate values for my model.

  2. I took the flutter_tflite library and modified it so that the inferencing section of its Android code matched the TensorFlow example above: https://github.com/shaqian/flutter_tflite

  3. I used this tutorial, and its included repo, which uses the above library to run inference via the platform channel: https://github.com/flutter-devs/tensorflow_lite_flutter

Via the Flutter tutorial, I use the camera plugin, which can stream CameraImage objects from the camera's live feed. I pass those into the modified Flutter TensorFlow library, which uses the platform channel to pass the image into the Android layer. It arrives there as a list of byte arrays (3 planes, YUV). The TensorFlow Android example (1), with the working FloatEfficientNet code, expects a Bitmap, so I am using this method to convert:
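As a sanity check on that hand-off (a sketch, not code from the question): for a planar YUV420 frame, the Y plane holds one byte per pixel and the U and V planes each hold one byte per 2x2 pixel block. If the byte arrays crossing the platform channel don't match these sizes, the planes likely carry row/pixel strides (as `YUV_420_888` planes can) and can't be treated as tightly packed:

```java
// Expected tightly-packed plane sizes for a planar YUV420 frame.
// If the arrays received over the platform channel are larger than this,
// the planes include padding/strides that must be handled explicitly.
public class PlaneSizes {
    static int ySize(int width, int height) {
        return width * height;          // 1 byte per pixel
    }

    static int chromaSize(int width, int height) {
        return (width * height) / 4;    // 1 byte per 2x2 block, per chroma plane
    }

    public static void main(String[] args) {
        System.out.println(ySize(640, 480));      // 307200
        System.out.println(chromaSize(640, 480)); // 76800
    }
}
```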

    public Bitmap imageToBitmap(List<byte[]> planes, float rotationDegrees, int width, int height) {

        // NV21 is a plane of 8-bit Y values followed by interleaved V/U (Cr/Cb) pairs;
        // it needs width * height * 3 / 2 bytes (this buffer is over-allocated)
        ByteBuffer ib = ByteBuffer.allocate(width * height * 2);

        ByteBuffer y = ByteBuffer.wrap(planes.get(0));
        ByteBuffer cr = ByteBuffer.wrap(planes.get(1));
        ByteBuffer cb = ByteBuffer.wrap(planes.get(2));
        ib.put(y);
        ib.put(cb);
        ib.put(cr);

        YuvImage yuvImage = new YuvImage(ib.array(),
                ImageFormat.NV21, width, height, null);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), 50, out);
        byte[] imageBytes = out.toByteArray();
        Bitmap bitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);

        // On android the camera rotation and the screen rotation
        // are off by 90 degrees, so if you are capturing an image
        // in "portrait" orientation, you'll need to rotate the image.
        if (rotationDegrees != 0) {
            Matrix matrix = new Matrix();
            matrix.postRotate(rotationDegrees);
            bitmap = Bitmap.createBitmap(bitmap, 0, 0,
                    bitmap.getWidth(), bitmap.getHeight(), matrix, true);
        }
        return bitmap;
    }
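One thing worth noting about the method above: NV21 requires the full Y plane followed by *interleaved* V/U bytes, while appending the chroma planes one after the other produces planar (I420-style) data, which `YuvImage` will misinterpret. Below is a minimal sketch of the interleaving step, assuming tightly packed planes (no row or pixel stride) and using a hypothetical helper name; it is pure Java so it can be tested outside Android:

```java
// Sketch: build an NV21 buffer (Y plane + interleaved V/U) from three
// tightly packed planar YUV420 planes. Real CameraImage planes may carry
// row/pixel strides, which this sketch deliberately ignores.
public class Nv21Converter {
    public static byte[] yuv420ToNv21(byte[] y, byte[] u, byte[] v,
                                      int width, int height) {
        byte[] nv21 = new byte[width * height * 3 / 2];

        // Y plane is copied verbatim.
        System.arraycopy(y, 0, nv21, 0, width * height);

        // NV21 chroma order is V first, then U, interleaved per 2x2 block.
        int offset = width * height;
        for (int i = 0; i < u.length; i++) {
            nv21[offset + 2 * i]     = v[i];
            nv21[offset + 2 * i + 1] = u[i];
        }
        return nv21;
    }
}
```

The resulting array can then be handed to `new YuvImage(nv21, ImageFormat.NV21, width, height, null)` in place of the concatenated buffer.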

The inference runs, and I am able to return the values to Flutter and display the results, but they are way off. On the same Android phone, the results are completely different from the native example's.

I suspect the flaw is in the conversion of the CameraImage data into the Bitmap, since it's the only piece of the whole chain that I am not able to test independently. If anyone who has faced a similar issue could assist, I would appreciate it; I am rather puzzled.

Upvotes: 1

Views: 280

Answers (1)

Antonin GAVREL

Reputation: 11219

I think the reason is that the matrix.postRotate() method expects an integer, but you are giving it a float, so an implicit conversion from float to integer messes it up.

Upvotes: 1
