Joao

Reputation: 1

Different results when running both iOS and Android versions of MLKit Text Recognition On-Device API with the same image as input

This is more a question for the Firebase/MLKit team.

When using the same image as input for the iOS and Android versions of the MLKit Text Recognition On-Device API, I get different results for the bounding box information (x, y, width, height) returned by each platform.
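For reference, this is roughly how I read the bounding boxes on the Android side with the beta Firebase ML Kit API (a minimal sketch; the `bitmap` parameter and the log tag are placeholders, not part of the API):

    import android.graphics.Bitmap
    import android.util.Log
    import com.google.firebase.ml.vision.FirebaseVision
    import com.google.firebase.ml.vision.common.FirebaseVisionImage

    // Minimal sketch: assumes `bitmap` already holds the input image.
    fun logTextBlockBounds(bitmap: Bitmap) {
        val image = FirebaseVisionImage.fromBitmap(bitmap)
        val detector = FirebaseVision.getInstance().onDeviceTextRecognizer

        detector.processImage(image)
            .addOnSuccessListener { result ->
                // Each recognized block exposes its bounding box
                // as an android.graphics.Rect.
                for (block in result.textBlocks) {
                    Log.d("TextRecognition", "'${block.text}' -> ${block.boundingBox}")
                }
            }
            .addOnFailureListener { e ->
                Log.e("TextRecognition", "Recognition failed", e)
            }
    }

The iOS code does the equivalent with VisionTextRecognizer, and the boxes I compared come from these per-block bounding box values on each platform.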

Please see below the original input image I used for my tests, and another image showing the resulting text blocks' bounding boxes drawn from the information returned by both Text Recognition on-device APIs (blue is the iOS result, red is the Android one):

Does anyone know what could cause such differences between the iOS and Android results? I suspect they use different ML models for text recognition / bounding box extraction. If so, is there any chance of both solutions running the same model in the near future, given that they are still in beta?

Any thoughts are welcome!

Upvotes: 0

Views: 407

Answers (1)

Shiyu

Reputation: 935

You are right. The underlying engines for iOS and Android are currently different in ML Kit. We will update the models to make them consistent in later releases.

Upvotes: 1
