Reputation: 95
I created a new conda environment and installed tensorflow-gpu from conda (the latest version available there is 2.5.0). Then I tested whether the environment recognizes my GPU, and it does not. It returns
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 1364016363571256103
]
when running device_lib.list_local_devices() in TensorFlow. What am I missing?
cuDNN and cudatoolkit were pulled in as dependencies by conda when installing tensorflow-gpu:
cudnn==8.2.1.32
cudatoolkit==11.3.1
The commands I ran were:
conda create --name ML4
conda activate ML4
conda install tensorflow-gpu=2.5
and then, in Python:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
which yielded the output above, listing only my CPU and not my GPU.
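For reference, the equivalent check with the TF 2.x tf.config API looks like this (a minimal sketch; on a working setup it should print a non-empty list):
import tensorflow as tf

# An empty list here means TensorFlow cannot see the GPU at all;
# a working setup prints something like
# [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
print(tf.config.list_physical_devices('GPU'))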
Upvotes: 1
Views: 1454
Reputation: 853
The TF 2.5 prebuilt binaries are compatible with CUDA 11.2 and cuDNN 8.1.
See the tested build configuration chart: https://www.tensorflow.org/install/source#gpu.
Therefore you have to roll back to CUDA 11.2 (and cuDNN 8.1).
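A minimal sketch of one way to do that with conda, assuming the conda-forge channel carries matching builds for your platform (exact version strings may differ):
conda create --name ML4
conda activate ML4
# pin CUDA/cuDNN to the versions TF 2.5 was built against
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1
conda install tensorflow-gpu=2.5
After that, tf.config.list_physical_devices('GPU') should list the card, provided the host NVIDIA driver is new enough for CUDA 11.2.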
Thanks!
Upvotes: 1