Reputation: 1125
Has anyone tried to run an object detection or CRNN model on Android? I tried to run a CRNN model (serialized with PyTorch), but inference takes 1 s on a Huawei P30 lite and 5 s on a Samsung J4 Core.
Huawei P30 lite
CPU : octa core processor
GPU : Mali-G51 MP4
Samsung J4
CPU : quad core
GPU : Adreno 308
GPUs in Android devices differ from dedicated GPUs in that they have no VRAM or separate power management; the CPU and GPU share the same RAM. Before running a model on a PC with a GPU, we explicitly place the computation on the GPU, like:
model = MyModel()
model.cuda()
But when I run the model on Android, does it take advantage of this built-in GPU? Or is the computation faster on my Huawei simply because of its octa-core processor? The Huawei obviously also has a better GPU than my Samsung device.
Upvotes: 3
Views: 1382
Reputation: 487
TFLite works well if you can move away from PyTorch.
Specifically, for your Mali GPU, as per https://developer.arm.com/ip-products/processors/machine-learning/arm-nn - Arm NN will farm out work appropriately to the CPU/GPU, working beneath the Android NNAPI that interfaces with TFLite / Caffe2.
NNAPI info: https://developer.android.com/ndk/guides/neuralnetworks
Upvotes: 1
Reputation: 300
At the moment it is not possible to run PyTorch on an ARM GPU.
I think the differences in speed result from the different CPUs!
Upvotes: 1