Reputation: 32497
I tried using CUDA with PyTorch on my setup, but it is not detected and I am puzzled as to why.
torch.cuda.is_available()
returns False. Digging deeper,
torch._C._cuda_getDeviceCount()
returns 0. I am using version 1.5:
$ pip freeze | grep torch
torch==1.5.0
I wrote a small C program that does the same thing:
#include <stdio.h>
#include <cuda_runtime_api.h>

int main(void) {
    /* Ask the CUDA runtime how many devices it can see. */
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Device count: %d\n", count);
    return 0;
}
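(For reference, a compile line along these lines should work; the file name devcount.c and the /usr/local/cuda paths are just placeholders for wherever the CUDA toolkit lives on your system:)
$ gcc devcount.c -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart -o devcount
$ ./devcount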
This prints 1, so the CUDA runtime can obviously find a device. Also, here is the output of nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 435.21       Driver Version: 435.21       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 00000000:02:00.0  On |                  N/A |
|  0%   41C    P8     9W / 200W |    219MiB /  6075MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
So where did my CUDA device disappear to in Python?
Upvotes: 7
Views: 9007
Reputation: 32497
I just realized that there is a different build of PyTorch for each minor version of CUDA. In my case, torch==1.5.0
apparently defaults to CUDA 10.2, while the CUDA 10.1 specific package torch==1.5.0+cu101
works.
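Concretely, something along these lines did the trick for me (taken from the version selector on the PyTorch site, which points pip at their extra wheel index; your exact versions may differ):
$ pip install torch==1.5.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html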
I hope this clears things up for other people who, like me, started from the docs on PyPI (the more up-to-date install instructions are here: https://pytorch.org/get-started/locally/).
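After reinstalling, a quick sanity check should look something like this (torch.version.cuda reports the CUDA version the installed wheel was built against):
import torch
print(torch.version.cuda)         # expect '10.1' for the +cu101 wheel
print(torch.cuda.is_available())  # now True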
Upvotes: 6