Reputation: 83167
I have TensorFlow-GPU 1.0.0 installed on a server running Ubuntu 14.04.4 LTS x64.
I know I can use CUDA_VISIBLE_DEVICES to hide one or several GPUs. Sometimes I would like to hide all GPUs so that the TensorFlow-based program only uses the CPU, so I tried:
username@server:/scratch/coding/src$ CUDA_VISIBLE_DEVICES="" python my_script.py
but this gives me the error message:
E tensorflow/stream_executor/cuda/cuda_driver.cc:509] failed call to cuInit: CUDA_ERROR_NO_DEVICE
Here is the ConfigProto I use:
session_conf = tf.ConfigProto(
    device_count={'CPU': 1, 'GPU': 1},
    allow_soft_placement=True,
    log_device_placement=False
)
sess = tf.Session(config=session_conf)
I know I could use device_count={'GPU': 0} to prevent the TensorFlow-based program from using the GPU, but I wonder whether this can be achieved from the command line when launching the program (without changing the ConfigProto).
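For reference, here is a minimal sketch of that in-code alternative, with the GPU count set to 0 in the ConfigProto (this is exactly what I would like to avoid, since it means editing the script rather than just the launch command):

import tensorflow as tf

# Setting the GPU device count to 0 keeps the session on the CPU,
# regardless of CUDA_VISIBLE_DEVICES.
session_conf = tf.ConfigProto(
    device_count={'GPU': 0},
    allow_soft_placement=True,
    log_device_placement=False
)
sess = tf.Session(config=session_conf)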
The option allow_soft_placement=True, according to the documentation, is supposed to have TensorFlow automatically choose an existing and supported device to run the operations in case the specified one doesn't exist.
My first reaction when I saw the message was that CUDA needs at least one GPU to load successfully, but I have read that one can install the GPU drivers and use TensorFlow-GPU on a machine even if the machine doesn't have a GPU.
Here is the my_script.py script I use for the test:
import tensorflow as tf
a = tf.constant(1, name='a')
b = tf.constant(3, name='b')
c = tf.constant(9, name='c')
d = tf.add(a, b, name='d')
e = tf.add(d, c, name='e')
session_conf = tf.ConfigProto(
    device_count={'CPU': 1, 'GPU': 1},
    allow_soft_placement=True,
    log_device_placement=False
)
sess = tf.Session(config=session_conf)
print(sess.run([d, e]))
Upvotes: 4
Views: 3578
Reputation: 57893
I think that "E" should've been a "W" or "I", that's just an informational message and should not affect functioning of your program
Upvotes: 1