Reputation: 369
I am trying to use NNAPI via OnnxRuntime for neural network model inferencing on an Android device. Based on this YouTube video: https://www.youtube.com/watch?v=Ij5MoUnLQ0E it is possible to specify the hardware accelerators for operators in the model. I would be grateful for any guidance on how to proceed with that.
Upvotes: 0
Views: 169
Reputation: 41
In NNAPI, it is possible to discover which hardware accelerators are present, and to select specific ones for running a model.
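For reference, here is a minimal sketch of the discovery side using the NNAPI C API from the NDK (android/NeuralNetworks.h, available from API level 29); the function and constant names are from that header, and the printing is just for illustration:

```c
// Sketch: enumerate the accelerators NNAPI can see on this device.
#include <android/NeuralNetworks.h>
#include <stdint.h>
#include <stdio.h>

void list_nnapi_devices(void) {
    uint32_t device_count = 0;
    if (ANeuralNetworks_getDeviceCount(&device_count) != ANEURALNETWORKS_NO_ERROR) {
        return;
    }
    for (uint32_t i = 0; i < device_count; ++i) {
        ANeuralNetworksDevice* device = NULL;  // owned by the runtime, not freed by us
        if (ANeuralNetworks_getDevice(i, &device) != ANEURALNETWORKS_NO_ERROR) {
            continue;
        }
        const char* name = NULL;
        int32_t type = 0;
        ANeuralNetworksDevice_getName(device, &name);
        ANeuralNetworksDevice_getType(device, &type);  // CPU, GPU, ACCELERATOR, ...
        printf("device %u: %s (type %d)\n", i, name, type);
    }
}
```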
If the application specifies more than one accelerator, the NNAPI runtime partitions the work depending on the characteristics of each accelerator and the layers they support.
It is not possible for an application to make that assignment on a layer-by-layer basis. If that is what you need, you may have to break your model into sub-models.
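A sketch of the assignment side follows, assuming you already have a finished ANeuralNetworksModel and a list of devices obtained as above (the model, devices, num_devices, and op_count names are placeholders). One caveat worth knowing: when a compilation is pinned to devices with ANeuralNetworksCompilation_createForDevices, the runtime does not fall back to the CPU for unsupported operations, which is why checking ANeuralNetworksModel_getSupportedOperationsForDevices first is useful:

```c
// Sketch: pin a compilation to chosen devices and query per-operation support.
#include <android/NeuralNetworks.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

int compile_for_devices(ANeuralNetworksModel* model,
                        const ANeuralNetworksDevice* const* devices,
                        uint32_t num_devices, uint32_t op_count) {
    // One flag per operation, in model order: can the chosen devices run it?
    // With createForDevices there is no automatic CPU fallback, so an
    // unsupported operation means compilation fails (or you split the model).
    bool* supported = malloc(op_count * sizeof(bool));
    ANeuralNetworksModel_getSupportedOperationsForDevices(
        model, devices, num_devices, supported);
    free(supported);  // a real app would inspect the flags first

    // Restrict the compilation to the given devices; the NNAPI runtime
    // partitions the graph among them.
    ANeuralNetworksCompilation* compilation = NULL;
    int status = ANeuralNetworksCompilation_createForDevices(
        model, devices, num_devices, &compilation);
    if (status != ANEURALNETWORKS_NO_ERROR) {
        return status;
    }
    status = ANeuralNetworksCompilation_finish(compilation);
    // A real app would create ANeuralNetworksExecution objects here
    // before freeing the compilation.
    ANeuralNetworksCompilation_free(compilation);
    return status;
}
```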
See the "Device discovery and assignment" section in the NNAPI documentation for more details.
I am not familiar with OnnxRuntime, so I don't know whether that package exposes this functionality.
Upvotes: 0