Reputation: 109
I have installed tensorflow-gpu version 1.15 on my profile on a cluster, which has access to 2 GPUs. I was able to verify this by running
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
The above statements yield the list of local devices as:
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 17161457237421390575,
name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 2136131381156225295
physical_device_desc: "device: XLA_CPU device",
name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 5626920946153973344
physical_device_desc: "device: XLA_GPU device",
name: "/device:XLA_GPU:1"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 1069390960246559975
physical_device_desc: "device: XLA_GPU device"]
which clearly lists the GPU devices. On further searching, I learned that XLA_GPU corresponds to a GPU exposed through TensorFlow's XLA (Accelerated Linear Algebra) compiler. However, when I run the GPU test function
tf.test.is_gpu_available()
the output is False. I'm not sure whether the GPUs are simply not being detected here, or whether there is an issue with the tensorflow-gpu installation (which was done through pip). Any input on this would be appreciated.
Upvotes: 2
Views: 4085
Reputation:
The recommended way to check whether TensorFlow can use a GPU is the following:
tf.config.list_physical_devices('GPU')
Output:
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
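As a minimal sketch built on that call (note: in TF 1.15 the same function is, I believe, available as tf.config.experimental.list_physical_devices):
import tensorflow as tf

# List the GPUs TensorFlow can actually use
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    print("Detected %d GPU(s): %s" % (len(gpus), gpus))
else:
    print("No GPU visible to TensorFlow")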
The following will also return the name of your GPU device:
import tensorflow as tf
tf.test.gpu_device_name()
If a non-GPU version of the package is installed, the function would also return False. Use tf.test.is_built_with_cuda to validate whether TensorFlow was built with CUDA support.
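As a rough sketch, both checks can be combined into a single diagnostic (gpu_device_name returns an empty string when no GPU is usable, and is_built_with_cuda returns a plain bool):
import tensorflow as tf

# Empty string means no usable GPU; otherwise something like '/device:GPU:0'
device_name = tf.test.gpu_device_name()
print("GPU device name:", device_name or "<none>")

# True only if this TensorFlow binary was compiled with CUDA support
print("Built with CUDA:", tf.test.is_built_with_cuda())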
Note: tf.test.is_gpu_available is deprecated. Its documentation states:
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.config.list_physical_devices('GPU') instead.
The best way to test is to run code and check GPU utilization with nvidia-smi, as mentioned by Matias Valdenegro, or to run simple code like the example below:
import tensorflow as tf

# Place the ops on the first GPU
with tf.device('/GPU:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

with tf.compat.v1.Session() as sess:
    print(sess.run(c))
Output:
[[22. 28.]
[49. 64.]]
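If you are on TF 2.x with eager execution enabled, a sketch of the equivalent check without a Session (same matmul, result prints directly) would be:
import tensorflow as tf

# Eager-mode equivalent (TF 2.x): no Session needed
with tf.device('/GPU:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    print(tf.matmul(a, b))  # expect [[22. 28.] [49. 64.]]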
Upvotes: 3