Reputation: 1971
I have TensorFlow-based code which I run on various computers, some with only CPUs and some with both CPUs and GPUs.
If a GPU is available on the machine, I would like to give the user the option of using the CPU instead.
The code from this answer works fine:
import os
import tensorflow as tf

os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

if tf.test.gpu_device_name():
    print('GPU found')
else:
    print("No GPU found")
# No GPU found
However, I would like to check if a GPU is available first, and then disable it.
I tried:
import tensorflow as tf

if tf.test.gpu_device_name():
    import os
    os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

if tf.test.gpu_device_name():
    print('GPU found')
# GPU found
But it does not work: once tf.test.gpu_device_name() has been called, TensorFlow always remembers that the system has a GPU. I also tried del tf and importlib.reload(tf), to no avail.
The only thing that does work is to quit the interpreter and run the first script above.
How can I make the code "forget" about the GPU once it has been found?
Upvotes: 1
Views: 2174
Reputation: 20214
I don't understand why you need to make TensorFlow forget. Having a GPU does not mean you have to use it. You can use tf.device to specify the device that should run a given operation.
For example:
import tensorflow as tf

# Place tensors on the CPU
with tf.device('/CPU:0'):
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    c = tf.matmul(a, b)

print(c)
So even though you have a GPU, the program will still use the CPU.
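As a minimal sketch of how this could be wired to the user option mentioned in the question (use_cpu is a hypothetical flag; tf.config.list_physical_devices is used here just to check whether TensorFlow sees any GPUs):

import tensorflow as tf

# Hypothetical user-supplied option: True forces the CPU even when a GPU exists
use_cpu = True

# tf.config.list_physical_devices('GPU') returns the GPUs TensorFlow can see
gpu_available = bool(tf.config.list_physical_devices('GPU'))
device = '/GPU:0' if gpu_available and not use_cpu else '/CPU:0'

with tf.device(device):
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    c = tf.matmul(a, b)

print(c)

This way you never need to make TensorFlow forget the GPU; you only decide where each operation runs.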
Upvotes: 2