djacobs7

Reputation: 11837

How do you run an ONNX model on a GPU?

I'm trying to run an ONNX model:

import onnxruntime as ort

model_path = "model.onnx"

# https://microsoft.github.io/onnxruntime/
ort_sess = ort.InferenceSession(model_path)

print(ort.get_device())

This prints out

cpu

How can I make it run on my GPU? How can I confirm it's working?

Upvotes: 19

Views: 83988

Answers (3)

Abhijit Manepatil

Reputation: 957

The get_device() command tells you which device this onnxruntime installation supports. There are separate runtime packages for CPU and GPU.

Currently your onnxruntime environment supports only the CPU, because you have installed the CPU version of onnxruntime.

If you want to set up an onnxruntime environment for the GPU, use the following simple steps.

Step 1: uninstall your current onnxruntime

>> pip uninstall onnxruntime

Step 2: install the GPU version of onnxruntime

>> pip install onnxruntime-gpu

Step 3: verify the device support for the onnxruntime environment

>>> import onnxruntime as rt
>>> rt.get_device()
'GPU'

Step 4: If you still encounter issues, check your CUDA and cuDNN versions; they must be compatible with each other. Consult the onnxruntime documentation to understand the version compatibility between CUDA and cuDNN.

Upvotes: 13

bigbro

Reputation: 41

Your onnxruntime-gpu version should match your CUDA and cuDNN versions; you can check the compatibility matrix on the official website: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html

Upvotes: 3

Sergii Dymchenko

Reputation: 7229

You probably installed the CPU version. Try uninstalling onnxruntime and installing the GPU version instead, e.g. pip install onnxruntime-gpu.

Then:

>>> import onnxruntime as ort
>>> ort.get_device()
'GPU'

Upvotes: 29
