Reputation: 93
I am using Firebase ML Text Recognition and Entity Extraction to let the user take a picture of a product's price tag at a store. The app extracts the price from the picture and puts it in the UI. This works perfectly fine. However, after the app opens the camera, the user has to take a picture, which is then fed to the Text Recognition API. Having to take the picture adds enough delay that the feature loses its advantage over just typing in the price.
Is there a way to process the camera feed without the user having to take the picture?
Upvotes: 0
Views: 207
Reputation: 193
You can use CameraX - it is quite easy to get started. Put the recognition logic in the `analyze` callback; when you are done with that image, call `imageProxy.close()` so the camera feed emits the next frame.
You can find sample here https://github.com/android/camera-samples/tree/main/CameraXBasic
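A minimal sketch of what that analyzer could look like, assuming the CameraX `camera-core`/`camera-lifecycle` artifacts and ML Kit's on-device `text-recognition` library are on the classpath (the price-parsing step is left as a placeholder):

```kotlin
import androidx.camera.core.ImageAnalysis
import androidx.core.content.ContextCompat
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Analyze only the most recent frame; older frames are dropped while busy.
val analysis = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()

val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

analysis.setAnalyzer(ContextCompat.getMainExecutor(context)) { imageProxy ->
    // imageProxy.image requires the @ExperimentalGetImage opt-in annotation.
    val mediaImage = imageProxy.image
    if (mediaImage != null) {
        val input = InputImage.fromMediaImage(
            mediaImage, imageProxy.imageInfo.rotationDegrees
        )
        recognizer.process(input)
            .addOnSuccessListener { text ->
                // Hypothetical hook: parse a price out of text.text here.
            }
            // Closing the proxy releases the frame so the next one arrives.
            .addOnCompleteListener { imageProxy.close() }
    } else {
        imageProxy.close()
    }
}
```

Bind `analysis` alongside your `Preview` use case via `ProcessCameraProvider.bindToLifecycle(...)`, exactly as the linked CameraXBasic sample does.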
Upvotes: 1