Reputation: 4170
I'm trying to use the TFLite Benchmark tool with a MobileNet model and checking the final inference time
in microseconds to compare different models. The issue I am facing is that the results vary between runs. I also found a section in the documentation about reducing variance between runs on Android. It explains how to set the CPU affinity before running the benchmark to get consistent results between runs. I'm currently using a Redmi Note 4 and a OnePlus for this work.
Can someone please explain what I should set the CPU affinity
value to for my experiments?
Can I find the affinity masks for different phones online, or on the Android device itself?
When I increase the --warmup_runs
parameter, the results vary less. Are there other ways I can make my results more consistent?
Are background processes on the Android phone affecting the inference time,
and is there a way to stop them to reduce the variance in the results?
Upvotes: 1
Views: 832
Reputation: 161
As the docs suggest, any value is fine, as long as you stay consistent with it across experiments. The one thing to consider is whether to use a big core or a little core (if you're on a big.LITTLE architecture); it's usually worth trying both, since they have different cache sizes, clock speeds, etc.
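As a minimal sketch of what this might look like over adb, assuming the benchmark binary and model have already been pushed to /data/local/tmp (the model filename is a placeholder, and which CPUs form the big cluster varies per SoC):

    # Mask f0 pins to CPUs 4-7 (often the big cluster), 0f to CPUs 0-3 (often the little cluster).
    adb shell taskset f0 /data/local/tmp/benchmark_model \
        --graph=/data/local/tmp/mobilenet_v1_1.0_224.tflite --num_threads=1
    adb shell taskset 0f /data/local/tmp/benchmark_model \
        --graph=/data/local/tmp/mobilenet_v1_1.0_224.tflite --num_threads=1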
Yes, you can typically find this information online. See http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0515b/CJHBGEBA.html as an example. Look up your particular phone, find out which SoC/CPU it uses, and then search for more details from there.
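You can also inspect the CPU topology directly on the device; a rough sketch with adb (these sysfs paths are standard, but what they report differs per device):

    # Identify the core types (CPU implementer / part fields).
    adb shell cat /proc/cpuinfo
    # Compare per-core maximum frequencies; big cores usually report higher values.
    adb shell "cat /sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_max_freq"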
I've tried --warmup_runs of 2000+ and the results are typically pretty stable. There's a bit more variance with smaller models. For intensive models (relative to the particular device), you might also want to check whether the device is overheating and throttling. I haven't seen this on mid-tier phones, but I've heard that people sometimes keep their devices in a cool place (fan, fridge) for this.
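For reference, a sketch of how those flags are passed (the paths, mask, and values are just examples):

    adb shell taskset f0 /data/local/tmp/benchmark_model \
        --graph=/data/local/tmp/mobilenet_v1_1.0_224.tflite \
        --warmup_runs=2000 --num_runs=1000 --num_threads=1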
They may, but that's largely unavoidable. The best you can do is close all applications and disconnect from the internet. I personally haven't seen them introduce much variance, though.
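If you want to script that part, one possible approach over adb (the package name is just an example; svc behavior can vary across Android versions):

    # Force-stop a specific app; repeat for each package you want closed.
    adb shell am force-stop com.example.someapp
    # Disable radios so network-triggered background work stays quiet.
    adb shell svc wifi disable
    adb shell svc data disable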
Upvotes: 1