user3496060

Reputation: 856

Check GPU memory used from python in Tensorflow 2.0

There are several threads here and here on SO covering how to get the GPU memory used by TensorFlow from Python via a contrib library and a session, but how can we do this in TF 2.0 with eager execution (the contrib library is not available for 2.0)?

Upvotes: 1

Views: 1934

Answers (1)

K. Bogdan

Reputation: 535

For now, it seems that this option is not available in TF 2. Some alternatives include:

  • Use the Python bindings for the NVIDIA Management Library (NVML), as explained in this issue
  • Query the info with the nvidia-smi command
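For the first option, a minimal sketch could look like the following. It assumes the NVML Python bindings are installed (the package is published as `nvidia-ml-py`, historically importable as `pynvml`); the function and helper names here are just illustrative, not from any TF API.

```python
def bytes_to_mib(n_bytes):
    """Convert bytes to whole MiB (the unit nvidia-smi reports)."""
    return n_bytes // (1024 * 1024)

def gpu_memory_used_mib(gpu_id=0):
    """Memory currently in use on the given GPU, in MiB, queried via NVML."""
    import pynvml  # pip install nvidia-ml-py (assumption: bindings available)
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_id)
        # nvmlDeviceGetMemoryInfo returns total/free/used in bytes
        return bytes_to_mib(pynvml.nvmlDeviceGetMemoryInfo(handle).used)
    finally:
        pynvml.nvmlShutdown()
```

This avoids spawning a subprocess and parsing text, at the cost of an extra dependency.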

For the second option, you can do something similar to this answer to get the current memory used in some GPU.

We first record the GPU's initial memory usage, then configure TF to allocate only as much memory as it needs (by default it grabs all available memory), and finally query the GPU again after running the model.

import subprocess as sp
import tensorflow as tf

def gpu_memory_usage(gpu_id):
    command = f"nvidia-smi --id={gpu_id} --query-gpu=memory.used --format=csv"
    output_cmd = sp.check_output(command.split())
    
    memory_used = output_cmd.decode("ascii").split("\n")[1]
    # Get only the memory part as the result comes as '10 MiB'
    memory_used = int(memory_used.split()[0])

    return memory_used

# The gpu you want to check
gpu_id = 0

initial_memory_usage = gpu_memory_usage(gpu_id)

# Set up the specified gpu
device_to_be_used = None
gpu_physical_devices = tf.config.list_physical_devices('GPU')
for device in gpu_physical_devices:
    # Physical device names look like '/physical_device:GPU:0'
    if int(device.name.split(":")[-1]) == gpu_id:
        device_to_be_used = device
        # Enable memory growth so TF does not grab all available GPU memory
        tf.config.experimental.set_memory_growth(device, True)

# Just to be sure that we are only using the required gpu
tf.config.set_visible_devices([device_to_be_used], 'GPU')


# Create your model here
# Do cool stuff ....

latest_gpu_memory = gpu_memory_usage(gpu_id)
print(f"(GPU) Memory used: {latest_gpu_memory - initial_memory_usage} MiB")

Do note that we made some assumptions here, such as no other process starting at the same time as ours, and other processes already running on the GPU not allocating more memory while we measure.

Upvotes: 2
