kwagjj

Reputation: 807

onnxruntime not using CUDA


While onnxruntime seems to recognize the GPU, once an InferenceSession is created it no longer seems to. The following code shows the symptom:

import onnxruntime as ort

print(f"onnxruntime device: {ort.get_device()}")  # output: GPU
print(f"ort avail providers: {ort.get_available_providers()}")  # output: ['CUDAExecutionProvider', 'CPUExecutionProvider']

onnx_file = "model.onnx"  # placeholder path to the ONNX model
ort_session = ort.InferenceSession(onnx_file, providers=["CUDAExecutionProvider"])

print(ort_session.get_providers())  # output: ['CPUExecutionProvider']
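For reference, raising the session's log verbosity should print why a provider fails to load. A minimal sketch using onnxruntime's SessionOptions (log_severity_level 0 is VERBOSE; the default is 2, WARNING):

import onnxruntime as ort

# Verbose logging surfaces provider load errors (e.g. missing CUDA libraries).
sess_options = ort.SessionOptions()
sess_options.log_severity_level = 0
ort_session = ort.InferenceSession(onnx_file, sess_options=sess_options, providers=["CUDAExecutionProvider"])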

I have no idea what could cause the InferenceSession not to detect and use the CUDA GPU.

I have tried reinstalling onnxruntime-gpu after removing the onnxruntime and onnx packages, but the problem persists.

Any suggestions on where to look?

Upvotes: 4

Views: 17136

Answers (1)

kwagjj

Reputation: 807

After adding the appropriate PATH and LD_LIBRARY_PATH entries, the code works. I guess I neglected to set them because I had gotten used to not caring about them while using PyTorch for a long time.

What I did:

export PATH=/usr/local/cuda-11.4/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH
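
To sanity-check that the CUDA runtime is now discoverable, here's a minimal sketch; the exact soname is an assumption (CUDA 11.x ships libcudart.so.11.0, but check your lib64 directory):

import ctypes

# If this load fails, LD_LIBRARY_PATH still does not point at the CUDA libs
# that onnxruntime-gpu needs. The soname is an assumption for CUDA 11.x.
try:
    ctypes.CDLL("libcudart.so.11.0")
    print("CUDA runtime library found")
except OSError as exc:
    print(f"CUDA runtime library not found: {exc}")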

After this, the code in the question worked, and the final line gave the output I wanted:

print(ort_session.get_providers())  # output: ['CUDAExecutionProvider', 'CPUExecutionProvider']

I can also see GPU memory being consumed in nvidia-smi.
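
For completeness, a minimal run that actually exercises the session so the allocation shows up in nvidia-smi. The float32 dtype and the substitution of 1 for symbolic dimensions are assumptions; check the model's real input spec:

import numpy as np
import onnxruntime as ort

ort_session = ort.InferenceSession(onnx_file, providers=["CUDAExecutionProvider"])
inp = ort_session.get_inputs()[0]
# Replace symbolic dims (e.g. a dynamic batch axis) with 1 for a dummy tensor.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)
outputs = ort_session.run(None, {inp.name: dummy})
print(outputs[0].shape)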

Upvotes: 3
