Kumar Govindam

Reputation: 115

How can I know whether a TensorFlow tensor is on CUDA or the CPU?

How can I know whether a TensorFlow tensor is on CUDA or the CPU? Take this very simple example:

import tensorflow as tf
tf.debugging.set_log_device_placement(True)

# Place tensors on the GPU
with tf.device('/device:GPU:0'):
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

# print tensor a
print(a)

# Run on the GPU
c = tf.matmul(a, b)
print(c)

The code runs fine. Here, I am explicitly placing tensors 'a' and 'b' on the GPU. While printing 'a', I get:

tf.Tensor(
[[1. 2. 3.]
 [4. 5. 6.]], shape=(2, 3), dtype=float32)

It does not give any info about whether 'a' is on the CPU or the GPU. Now, suppose an intermediate tensor like 'c' gets created during some operation. How can I tell whether 'c' is a CPU or a GPU tensor? Also, suppose the tensor is placed on the GPU. How can I move it to the CPU?

Upvotes: 8

Views: 9452

Answers (2)

sebastian-sz

Reputation: 1488

As of TensorFlow 2.3 you can use the .device property of a Tensor:

import tensorflow as tf

a = tf.constant([1, 2, 3])
print(a.device)  # /job:localhost/replica:0/task:0/device:CPU:0

A more detailed explanation can be found here
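For the last part of the question (moving a tensor back to the CPU), a minimal sketch: running the tensor through tf.identity inside a tf.device('/CPU:0') scope yields a CPU-resident copy. The variable names below are just illustrative.

import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0])  # placed on the GPU by default if one is available

# Copy 'a' to the CPU by running a no-op identity inside a CPU device scope
with tf.device('/CPU:0'):
    a_cpu = tf.identity(a)

print(a.device)      # e.g. /job:localhost/replica:0/task:0/device:GPU:0 on a GPU machine
print(a_cpu.device)  # /job:localhost/replica:0/task:0/device:CPU:0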

Upvotes: 7

MichaelJanz

Reputation: 1815

You may be thinking of memory management in PyTorch, where you explicitly define which device a tensor lives on. To my knowledge, this is not supported in TensorFlow (talking about 2.x); you either work on the CPU or the GPU. Depending on your TF version, this is decided at the first declaration of a tensor. As far as I know, the GPU is used by default; otherwise it has to be specified explicitly before you start any graph operations.

Rule of thumb: if you have a working CUDA environment and a TF version that supports the GPU by default, tensors will always be on the GPU, otherwise on the CPU, unless you define the device manually.
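As a sketch of the "define it manually" case, assuming nothing beyond the standard tf.config and tf.device APIs:

import tensorflow as tf

# Check whether TensorFlow sees a GPU at all
print(tf.config.list_physical_devices('GPU'))

# Override the default placement and pin a tensor to the CPU explicitly
with tf.device('/CPU:0'):
    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])

print(x.device)  # ends in .../device:CPU:0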

Referring to the answer by Patwie on SO

Upvotes: 1
