robertradar

Reputation: 65

How can I check whether I use CPU or GPU in TensorFlow?

I've read that

os.environ["CUDA_VISIBLE_DEVICES"] = ''

takes care that tensorflow will run on CPU and that

os.environ["CUDA_VISIBLE_DEVICES"] = '0'

takes care that tensorflow will run on GPU 0.
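As I understand it, the variable has to be set before TensorFlow is imported, since the CUDA runtime reads it at initialization. A minimal sketch of checking which devices TensorFlow can actually see (using `device_lib`, TensorFlow's internal device-listing helper; the `try`/`except` is only so the snippet runs even where TensorFlow is not installed):

```python
import os

# Must be set BEFORE importing tensorflow; "" hides all GPUs, "0" exposes GPU 0.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

try:
    # device_lib lists the devices TensorFlow detected (TF 1.x and 2.x).
    from tensorflow.python.client import device_lib
    device_names = [d.name for d in device_lib.list_local_devices()]
except ImportError:
    device_names = []  # TensorFlow not installed in this environment

# With "" above, no '/device:GPU:N' entries should appear in the list.
print(device_names)
```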

How can I check which device is actually used?

The code

import tensorflow as tf

# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))

shows only the result

[[ 22.  28.]
 [ 49.  64.]]

but no device placement information.

Upvotes: 1

Views: 2335

Answers (2)

robertradar

Reputation: 65

I found that there are differences between Python and IPython. IPython is the kernel used in Spyder, so I'm guessing that's the reason for the differing outputs.

Upvotes: 0

bivouac0

Reputation: 2560

You should be able to do this by turning on TensorFlow's logging output. There are a few ways to do this. You can set a bash environment variable with:

export TF_CPP_MIN_LOG_LEVEL=1

or from within your code with:

tf.logging.set_verbosity(tf.logging.INFO)

On my system I get something like...

Device mapping:
/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:65:00.0, compute capability: 5.2
MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0
a: (Const): /job:localhost/replica:0/task:0/device:GPU:0
b: (Const): /job:localhost/replica:0/task:0/device:GPU:0
[[22. 28.]
 [49. 64.]]
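If you just want a yes/no answer rather than reading the log, a quick sketch using `tf.test.gpu_device_name()` (a real TensorFlow API that returns the GPU device name, or the empty string when TensorFlow falls back to the CPU; the `try`/`except` only guards against TensorFlow being absent):

```python
try:
    import tensorflow as tf  # TF 1.x, matching the snippets above
    # Returns e.g. '/device:GPU:0' when a GPU is visible, '' otherwise.
    gpu_name = tf.test.gpu_device_name()
except ImportError:
    gpu_name = ""

print("GPU device:", gpu_name if gpu_name else "none (running on CPU)")
```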

Upvotes: 1
