Reputation: 23
import tensorflow as tf
from tensorflow.python.client import device_lib

device_lib.list_local_devices()
tf.config.list_physical_devices('GPU')
Running the code above gives me this output:
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 932414320379148726]
[]
I am using an RTX 3060 Ti with CUDA 11.1 and cuDNN 8+ on Python 3.8.5. I have tried tensorflow-gpu (2.3, 2.4, and 2.5-dev), but none of them detects the GPU. Any solution?
Upvotes: 0
Views: 2280
Reputation: 1
With TensorFlow you can do:
import tensorflow as tf
# Get the GPU device name.
device_name = tf.test.gpu_device_name()
# The device name should look like the following:
if device_name == '/device:GPU:0':
print('Found GPU at: {}'.format(device_name))
else:
raise SystemError('GPU device not found')
Did you get the same error using PyTorch?
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
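One thing worth checking: each tensorflow-gpu release is only tested against one specific CUDA/cuDNN pair, and CUDA 11.1 does not match any of the three releases you tried. The sketch below encodes the pairs from TensorFlow's published tested-build-configurations table (values copied from that page at the time of writing; double-check the current table before relying on them):

```python
# CUDA/cuDNN versions each tensorflow-gpu release was built against,
# per TensorFlow's tested build configurations table
# (https://www.tensorflow.org/install/source#gpu).
TESTED_CONFIGS = {
    "2.3": {"cuda": "10.1", "cudnn": "7.6"},
    "2.4": {"cuda": "11.0", "cudnn": "8.0"},
    "2.5": {"cuda": "11.2", "cudnn": "8.1"},
}

def expected_cuda(tf_version: str) -> str:
    """Return the CUDA version a given tensorflow-gpu release expects."""
    return TESTED_CONFIGS[tf_version]["cuda"]

# The installed toolkit is CUDA 11.1 -- none of the tried releases match:
installed_cuda = "11.1"
for version in ("2.3", "2.4", "2.5"):
    matches = expected_cuda(version) == installed_cuda
    print(f"TF {version}: expects CUDA {expected_cuda(version)}, match={matches}")
```

If that is the problem, either install CUDA 11.0 for TF 2.4, or CUDA 11.2 for TF 2.5. Also note that an RTX 3060 Ti (Ampere) needs CUDA 11.x, so TF 2.3 (built for CUDA 10.1) cannot use it at all.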
Upvotes: 0