fcc

Reputation: 177

Why is Qualcomm's QNN x86_64 CPU backend 88x slower than onnxruntime?

I am new to Qualcomm's AI Engine Direct SDK (QNN). Before deploying AI models directly to a Qualcomm device, I would like to take a look at QNN's x86_64 backend, which is also relevant to QNN's quantization procedure.

However, I found that Qualcomm's QNN x86_64 CPU backend is about 88x slower than onnxruntime for inception_v3.

Here are steps to reproduce the issue:

  1. Set up the QNN SDK following Qualcomm's instructions.

  2. Download the model and convert it to ONNX

import torch

# Source of model: https://pytorch.org/hub/pytorch_vision_inception_v3/

model = torch.hub.load("pytorch/vision:v0.10.0", "inception_v3", pretrained=True)
model.eval()

x = torch.rand(1, 3, 299, 299)
torch.onnx.export(model, x, "inception_v3.onnx", opset_version=17)
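
(If you are unsure which tensor names to pass to the converter below, they can be read back from the exported model with onnxruntime; the names 'x.1' and '914' are auto-generated by torch.onnx.export, so they may differ in other setups.)

import onnxruntime

# Inspect the exported graph's input/output names used by qnn-onnx-converter
sess = onnxruntime.InferenceSession("inception_v3.onnx", providers=["CPUExecutionProvider"])
print([i.name for i in sess.get_inputs()])   # e.g. ['x.1']
print([o.name for o in sess.get_outputs()])  # e.g. ['914']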
  3. Convert the model to a QNN .cpp file
${QNN_SDK_ROOT}/bin/x86_64-linux-clang/qnn-onnx-converter \
  --input_network inception_v3.onnx \
  --input_dim 'x.1' 1,3,299,299 \
  --out_node '914' \
  --output_path inception_v3.cpp
  4. Compile
mkdir -p model_libs

${QNN_SDK_ROOT}/bin/x86_64-linux-clang/qnn-model-lib-generator \
  -c inception_v3.cpp \
  -b inception_v3.bin \
  -t x86_64-linux-clang \
  -o model_libs
  5. Generate the model input
import numpy as np

# Write one float32 input tensor (CHW layout, no batch dimension) as a raw binary file
np.random.rand(3, 299, 299).astype(np.float32).tofile("input.raw")

# See the rest of the input preparation in: https://github.com/quic/ai-hub-models/issues/17

and

echo input.raw > input.txt
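
(As a quick sanity check on the raw file, its size should match one float32 CHW tensor, i.e. 3 * 299 * 299 * 4 = 1072812 bytes; this assumes the batch dimension is not included in the raw input, as in the script above.)

import os

# Expect one float32 tensor of shape (3, 299, 299): 3 * 299 * 299 * 4 bytes
assert os.path.getsize("input.raw") == 3 * 299 * 299 * 4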
  6. Run the model with profiling
${QNN_SDK_ROOT}/bin/x86_64-linux-clang/qnn-net-run \
              --backend ${QNN_SDK_ROOT}/lib/x86_64-linux-clang/libQnnCpu.so \
              --model model_libs/x86_64-linux-clang/libinception_v3.so \
              --input_list input.txt \
              --profiling_level=basic \
              --keep_num_outputs=0 \
              --num_inferences=10

Then view the log with qnn-profile-viewer:

${QNN_SDK_ROOT}/bin/x86_64-linux-clang/qnn-profile-viewer --input_log output/qnn-profiling-data_0.log

Here is the log

Input Log File Location: output/qnn-profiling-data_0.log
Log File Created: Thu Sep 26 08:49:41 2024
Time Scale: 1e-06
Epoch Timestamp: 1727340581547093 Steady Clock Timestamp: 1380276319731
Generated using:
qnn-profile-viewer v2.26.0.240827110523_99241
qnn-net-run        v2.26.0.240827110523_99241
Backend            v2.26.0.240827110523_99241

Qnn Init/Prepare/Finalize/De-Init/Execute/Lib-Load Statistics:
------------------------------------------------------------
Init Stats:
-----------
    NetRun: 171679 us

Compose Graphs Stats:
--------------
    NetRun: 95902 us

Finalize Stats:
---------------
Graph 0 (inception_v3):
    NetRun: 75775 us
    Backend (GRAPH_FINALIZE): 75769 us

De-Init Stats:
--------------
    NetRun: 20778 us
    Backend (null): 0 us

Execute Stats (Overall):
------------------------
    NetRun IPS (includes IO and misc. time): 0.5542 inf/sec

Execute Stats (Average):
------------------------
Total Inference Time:
---------------------
Graph 0 (inception_v3):
    NetRun: 1803480 us
    Backend (GRAPH_EXECUTE): 1803294 us

Execute Stats (Min):
------------------------
Total Inference Time:
---------------------
Graph 0 (inception_v3):
    NetRun: 1754020 us
    Backend (GRAPH_EXECUTE): 1753902 us

Execute Stats (Max):
------------------------
Total Inference Time:
---------------------
Graph 0 (inception_v3):
    NetRun: 1895948 us
    Backend (GRAPH_EXECUTE): 1895815 us

We see that Backend (GRAPH_EXECUTE) takes 1803294 us, i.e. about 1.8 seconds per inference (consistent with the reported 0.5542 inf/sec). However, running the same ONNX model with the ONNXRuntime CPU provider:

import numpy as np
import onnxruntime

x = np.random.rand(1, 3, 299, 299).astype(np.float32)

session = onnxruntime.InferenceSession(
    "inception_v3.onnx", providers=["CPUExecutionProvider"]
)

outputs = session.run(["914"], input_feed={"x.1": x})

import time

N = 100
t1 = time.time()
for _ in range(N):
    outputs = session.run(["914"], input_feed={"x.1": x})
t2 = time.time()

print(f"average inference time = {(t2 - t1)/N*1000} miliseconds")

The output is

average inference time = 21.910243034362793 milliseconds
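
(One thing I have not ruled out: onnxruntime uses multiple intra-op threads by default, while the QNN CPU backend may not. For a closer comparison, onnxruntime can be pinned to a single thread; a sketch, assuming the standard onnxruntime Python API:)

import numpy as np
import onnxruntime

# Pin onnxruntime to a single thread so the comparison with a (possibly
# single-threaded) QNN CPU backend is closer to apples-to-apples
opts = onnxruntime.SessionOptions()
opts.intra_op_num_threads = 1
opts.inter_op_num_threads = 1

session = onnxruntime.InferenceSession(
    "inception_v3.onnx", sess_options=opts, providers=["CPUExecutionProvider"]
)
x = np.random.rand(1, 3, 299, 299).astype(np.float32)
outputs = session.run(["914"], input_feed={"x.1": x})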

So I am wondering why QNN's x86_64 CPU backend is significantly slower than onnxruntime (1803.294 ms vs 21.91 ms)?

Any help is appreciated.

PS

    62.1ms [  INFO ] [QNN_CPU] QnnGraph execute start
  2086.4ms [  INFO ] [QNN_CPU] QnnGraph execute end

This slow execution time (about 2024 ms, from 62.1 ms to 2086.4 ms) is consistent with the QNN profiling numbers above.

Upvotes: 1

Views: 206

Answers (0)
