rambo

Reputation: 383

Enable XNNPack in TFLite v2.3 for ARM64

The TFLite team recently announced XNNPack support in TF v2.3 (https://blog.tensorflow.org/2020/07/accelerating-tensorflow-lite-xnnpack-integration.html). This should provide some pretty impressive speedups on float operations on ARM v8 cores.

Does anyone know how to enable XNNPack for ARM64 builds of TFLite? The benchmarking application in particular would be a good place to try out this new functionality on target hardware. iOS and Android support is enabled by passing a flag to Bazel when compiling (a rough sketch is included after the link below), but no equivalent guidance is given for ARM64 boards: the build instructions linked below haven't been updated, and inspecting download_dependencies.sh doesn't show XNNPack being downloaded from anywhere.

https://www.tensorflow.org/lite/guide/build_arm64
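
For context, the Android opt-in described in the blog post looks roughly like the following. Only the --define tflite_with_xnnpack=true flag is the documented opt-in; the --config=android_arm64 setting and the //tensorflow/lite/java:tensorflow-lite target are just an illustrative combination, not something spelled out for this case.

    # Rough sketch of the Android opt-in from the blog post; the config and
    # target shown here are illustrative, only the --define flag is the
    # documented XNNPACK switch.
    bazel build -c opt \
      --config=android_arm64 \
      --define tflite_with_xnnpack=true \
      //tensorflow/lite/java:tensorflow-lite

What I can't find is the equivalent incantation for a plain ARM64 Linux board.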

Upvotes: 0

Views: 2801

Answers (1)

jdduke

Reputation: 159

XNNPACK is not yet supported via Makefile-based builds. We have recently added experimental support for cross-compilation to ARM64 (via --config=elinux_aarch64 in the bazel build command), which should allow build-time opt-in to XNNPACK by also adding --define tflite_with_xnnpack=true in your build command. Expect some improvements in documentation for cross-compilation to ARM64 in the next TF 2.4 release, where we'll also be looking into enabling XNNPACK by default for as many platforms as possible.
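
Putting those two flags together, a cross-compile of the benchmark tool mentioned in the question might look like the sketch below. Only --config=elinux_aarch64 and --define tflite_with_xnnpack=true come from the answer above; the -c opt optimization flag and the //tensorflow/lite/tools/benchmark:benchmark_model target are my assumptions about the rest of the command.

    # Sketch: cross-compile the TFLite benchmark tool for ARM64 with XNNPACK.
    # Only the --config and --define flags are confirmed above; the target
    # path and -c opt are assumed.
    bazel build -c opt \
      --config=elinux_aarch64 \
      --define tflite_with_xnnpack=true \
      //tensorflow/lite/tools/benchmark:benchmark_model

The resulting binary under bazel-bin can then be copied to the target board for benchmarking.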

Upvotes: 1
