oezguensi

Reputation: 950

Converted ONNX model runs on CPU but not on GPU

I converted a TensorFlow model to ONNX using this command:

python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 10 --output model.onnx

The conversion was successful, and I can run inference on the CPU after installing onnxruntime.
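A minimal sketch of the CPU inference, assuming a MobileNetV2-style input of shape (1, 224, 224, 3) (check the real name and shape via get_inputs()):

import numpy as np
import onnxruntime as ort

# Load the converted model; the CPU-only onnxruntime build uses CPUExecutionProvider by default
sess = ort.InferenceSession("model.onnx")

# Inspect the actual input name and shape rather than hard-coding them
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)

# Dummy input; (1, 224, 224, 3) is an assumption for MobileNetV2
data = np.random.rand(1, 224, 224, 3).astype(np.float32)
outputs = sess.run(None, {inp.name: data})
print(outputs[0].shape)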

But when I create a new environment, install onnxruntime-gpu in it, and run inference on the GPU, I get different error messages depending on the model. E.g. for MobileNet I receive:

W:onnxruntime:Default, cuda_execution_provider.cc:1498 GetCapability] CUDA kernel not supported. Fallback to CPU execution provider for Op type: Conv node name: StatefulPartitionedCall/mobilenetv2_1.00_224/Conv1/Conv2D
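A minimal sketch of the GPU setup, assuming a reasonably recent onnxruntime-gpu build (the providers argument requests CUDA first, with CPU as fallback):

import onnxruntime as ort

# Confirm the CUDA provider is actually available in this build
print(ort.get_available_providers())

# Request the CUDA provider first; CPU is the fallback
sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Providers actually enabled for this session
print(sess.get_providers())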

I tried out different opsets. Does anyone know why I am getting these errors when running on the GPU?

Upvotes: 1

Views: 11070

Answers (1)

Hariharan Seshadri

Reputation: 106

That is not an error. It is a warning, and it is telling you that that particular Conv node will run on the CPU (instead of the GPU). This is most likely because the GPU backend does not yet support asymmetric paddings, and there is a PR in progress to address this: https://github.com/microsoft/onnxruntime/pull/4627. Once that PR is merged, these warnings should go away and such Conv nodes will run on the GPU backend.
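If you want to confirm where each node ends up, one option (a sketch, assuming the standard onnxruntime Python API) is to turn on verbose session logging; node placement decisions then appear in the log when the session is created:

import onnxruntime as ort

so = ort.SessionOptions()
# 0 = VERBOSE; node-to-provider assignments are logged at this level
so.log_severity_level = 0

sess = ort.InferenceSession(
    "model.onnx",
    sess_options=so,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)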

Upvotes: 1
