Vietnt134

Reputation: 511

Switching to Camera2 in Android Vision API

I saw that in the Android Vision API (the sample is here: https://github.com/googlesamples/android-vision) the camera API (Camera1) is now deprecated, and the recommendation is to use Camera2.

Do you guys have any idea how to rewrite CameraSource to use Camera2 with Android Vision?

Thanks in advance,

Upvotes: 12

Views: 3345

Answers (3)

Vietnt134

Reputation: 511

I haven't tried the link below because I stopped working with the Google Android Vision API, but I think it will be useful for anyone who wants to give it a try:

https://medium.com/@mt1729/an-android-journey-barcode-scanning-with-mobile-vision-api-and-camera2-part-1-8a97cc0d6747

Upvotes: 1

Ezequiel Adrian

Reputation: 816

It is possible to use Camera2 API with Google Vision API.

To start with, the Google Vision API Face Detector receives a Frame object that it uses to analyze the image (detect faces and their landmarks).

The Camera1 API provides preview frames in the NV21 image format, which is ideal for us. The Google Vision Frame.Builder supports both setImageData (a ByteBuffer in the NV16, NV21 or YV12 image format) and setBitmap, to process a Bitmap as the preview frame.

Your issue is that the Camera2 API provides preview frames in a different format: YUV_420_888. To make everything work, you have to convert the preview frames into one of the supported formats.
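
As a rough sketch (your actual setup may differ), the YUV_420_888 frames would typically come from an ImageReader like this; mImageReader, mBackgroundHandler and the inline handling are just placeholders:

mImageReader = ImageReader.newInstance(
        mPreviewSize.getWidth(), mPreviewSize.getHeight(),
        ImageFormat.YUV_420_888, 2 /* maxImages */);

mImageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireNextImage();
        if (image == null) return;
        try {
            // Convert the frame and hand it to the detector (see the steps below).
            byte[] nv21bytes = convertYUV420888ToNV21(image);
            // ... build a Frame from nv21bytes and pass it to the detector ...
        } finally {
            image.close(); // always give the Image back to the ImageReader
        }
    }
}, mBackgroundHandler);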

Once you get the Camera2 preview frame from your ImageReader as an Image, you can use this function to convert it to a supported format (NV21 in this case):

private byte[] convertYUV420888ToNV21(Image imgYUV420) {
    // Converting YUV_420_888 data to YUV_420_SP (NV21).
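    // NOTE: this shortcut relies on the common (but not guaranteed) YUV_420_888
    // layout where the chroma planes are interleaved (pixelStride == 2) and
    // rowStride == width, so the buffer of planes[2] already holds the
    // VU-interleaved data that NV21 expects. A fully general conversion
    // would inspect the plane strides instead.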
    byte[] data;
    ByteBuffer buffer0 = imgYUV420.getPlanes()[0].getBuffer();
    ByteBuffer buffer2 = imgYUV420.getPlanes()[2].getBuffer();
    int buffer0_size = buffer0.remaining();
    int buffer2_size = buffer2.remaining();
    data = new byte[buffer0_size + buffer2_size];
    buffer0.get(data, 0, buffer0_size);
    buffer2.get(data, buffer0_size, buffer2_size);
    return data;
}

Then you can use the returned byte[] to create a Google Vision Frame:

outputFrame = new Frame.Builder()
    .setImageData(nv21bytes, mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.NV21)
    .setId(mPendingFrameId)
    .setTimestampMillis(mPendingTimeMillis)
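    // Note: setRotation() expects one of the Frame.ROTATION_* constants (0-3),
    // so if mSensorOrientation holds degrees it may need to be mapped first.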
    .setRotation(mSensorOrientation)
    .build();

Finally, you call the detector with the created Frame:

mDetector.receiveFrame(outputFrame);
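
For completeness, mDetector could be set up roughly like this (just a sketch; the processor body and the context variable are placeholders):

FaceDetector mDetector = new FaceDetector.Builder(context)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .setMode(FaceDetector.FAST_MODE)
        .build();

mDetector.setProcessor(new Detector.Processor<Face>() {
    @Override
    public void receiveDetections(Detector.Detections<Face> detections) {
        SparseArray<Face> faces = detections.getDetectedItems();
        // Handle the detected faces and their landmarks here.
    }

    @Override
    public void release() {
        // Clean up any resources held by the processor.
    }
});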

Anyway, if you want to know more about this, you can check out my working example, available for free on GitHub: Camera2Vision. I hope I've helped :)

Upvotes: 3

ashishdhiman2007

Reputation: 817

Please have a look at this GitHub issue:

camera2 with mobile vision? #65

OK, I found this in that thread:

There are no near term plans for a camera2 version of the CameraSource class in the official API. However, given how the API is structured, an alternate version of CameraSource could be written by the developer community that uses camera2. All of the existing APIs for working with frames and detectors are sufficient to support a camera2 implementation as well.

Upvotes: 2
