Gepeto97

Reputation: 190

Confirm that TF2 is using my GPU when training

I am wondering if there is a way to confirm that my TF model is training on my GPU after I stored the training data on it, as advised in the TF tutorial. Here is a short code example:

import tensorflow as tf

print('Num GPUs Available:', len(tf.config.experimental.list_physical_devices('GPU')))

# load data on the GPU
with tf.device('/GPU:0'):
    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

# define, compile and train the model
# (Flatten plus a 10-unit output layer so the shapes match MNIST's 10 classes)
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])
model.fit(x_train, y_train, batch_size=32, epochs=5)

Upvotes: 3

Views: 11136

Answers (3)

Rohit Dhankar

Reputation: 1644

Now, as of August 2023, use tf.config.list_physical_devices('GPU'). Source: the official TF install page, tensorflow.org/install/pip
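
For example, a minimal check (a sketch, assuming TensorFlow 2.x):

import tensorflow as tf

# list_physical_devices('GPU') returns one PhysicalDevice per GPU TensorFlow can see
gpus = tf.config.list_physical_devices('GPU')
print('GPUs detected:', len(gpus))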

Upvotes: 0

bsquare

Reputation: 986

There is an easier way to achieve this:

import tensorflow as tf

# gpu_device_name() returns '' when no GPU is visible
device_name = tf.test.gpu_device_name()
if device_name:
    print('Default GPU Device: {}'.format(device_name))
else:
    print('Please install GPU version of TF')

(or)

# TF 1.x-style API; in TF 2.x it lives under tf.compat.v1
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))
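
In TF 2.x there is no default Session; a sketch of the native equivalent is device-placement logging:

import tensorflow as tf

# logs the device (CPU or GPU) on which each op executes
tf.debugging.set_log_device_placement(True)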

(or)

A few helpful functions are also available in TF:

Tells if a GPU is available:

tf.test.is_gpu_available()

Returns the name of the GPU device:

tf.test.gpu_device_name()
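
Putting those together in a minimal sketch (note that tf.test.is_gpu_available() is deprecated in recent TF 2.x releases in favor of tf.config.list_physical_devices('GPU')):

import tensorflow as tf

# is_gpu_available() is True if a GPU device such as '/device:GPU:0' can be used
if tf.test.is_gpu_available():
    print('GPU found:', tf.test.gpu_device_name())
else:
    print('No GPU found')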

Upvotes: 0

Lukasz Tracewski

Reputation: 11377

There are a couple of ways to check for a GPU in TensorFlow 2.x. Essentially, if a GPU is available, then the model will run on it (unless it is busy with, e.g., another instance of TF that has locked it). The placement will also show up in the log files and can be confirmed with, e.g., nvidia-smi.

In the code below, I will assume tensorflow is imported as tf (per convention and your code).

To check what devices are available, run:

tf.config.experimental.list_physical_devices()

Here's my output:

[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
 PhysicalDevice(name='/physical_device:XLA_CPU:0', device_type='XLA_CPU'),
 PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'),
 PhysicalDevice(name='/physical_device:XLA_GPU:0', device_type='XLA_GPU')]

In order to check if there is any GPU on the system:

is_gpu = len(tf.config.experimental.list_physical_devices('GPU')) > 0

As of TensorFlow 2.1, this functionality has been migrated out of experimental and you can use tf.config.list_physical_devices() in the same manner, i.e.

is_gpu = len(tf.config.list_physical_devices('GPU')) > 0 

At some point the experimental variant will be deprecated.

Last but not least, if your TensorFlow was built without CUDA (it's a non-GPU build), list_physical_devices('GPU') will return an empty list (so the check above yields False), even if your system physically has a GPU.
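
To tell the two cases apart, one option (a minimal sketch) is to also check whether the installed build was compiled with CUDA support:

import tensorflow as tf

# is_built_with_cuda() is True only for GPU-enabled (CUDA) builds of TensorFlow
print('Built with CUDA:', tf.test.is_built_with_cuda())
print('GPUs visible:', len(tf.config.list_physical_devices('GPU')))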

"Is it automatic once the gpu is recognized by TF?"

Yes. To quote the TF docs:

Note: Use tf.config.experimental.list_physical_devices('GPU') to confirm that TensorFlow is using the GPU.

If it is recognised, it will be used during training. If you'd like to be dead sure, you can ask for more explicit logging:

tf.debugging.set_log_device_placement(True)
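
For example (a minimal sketch): with placement logging enabled, running any op prints the device it executes on, along the lines of "Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0".

import tensorflow as tf

tf.debugging.set_log_device_placement(True)

# any op will do; its placement is logged when it executes
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
print(tf.matmul(a, b))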

Upvotes: 8
