Vishal

Reputation: 3296

Can we specify maximum absolute GPU memory usage

TensorFlow lets you specify the maximum fraction of GPU memory a process may use:

import tensorflow as tf
import keras

# Limit this process to 20% of the GPU's memory (TF 1.x API)
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
keras.backend.set_session(sess)

With the above, my code ends up consuming around 2203 MB of the 7982 MB of GPU RAM. Note that 2203 MB is more than 20% of 7982 MB.

My code also ends up running on various different GPUs, and a percentage-based limit doesn't translate well across them, since different GPUs have different amounts of RAM.

Is there a way to specify the maximum GPU memory to be used in absolute terms rather than relative terms?

I'm looking for something like a hypothetical per_process_gpu_memory_inmb option:

# Looking for something like `per_process_gpu_memory_inmb` option
gpu_options = tf.GPUOptions(per_process_gpu_memory_inmb=2203)   
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
keras.backend.set_session(sess)
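
In the meantime, a workaround sketch for the TF 1.x API: derive the fraction at runtime from the desired absolute budget and the GPU's total memory. The helper names (query_total_mb, absolute_to_fraction) and the nvidia-smi invocation are my own illustration, not part of TensorFlow's API.

```python
# Sketch: convert an absolute memory budget (MB) into the fractional
# limit that tf.GPUOptions(per_process_gpu_memory_fraction=...) expects.
import subprocess


def query_total_mb(gpu_index=0):
    """Ask nvidia-smi for the total memory (in MiB) of one GPU."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.total",
         "--format=csv,noheader,nounits"],
        encoding="utf-8",
    )
    return int(out.splitlines()[gpu_index].strip())


def absolute_to_fraction(limit_mb, total_mb):
    """Turn an absolute MB budget into a fraction, clamped to [0, 1]."""
    return min(max(limit_mb / total_mb, 0.0), 1.0)


# Usage with the numbers from the question:
# fraction = absolute_to_fraction(2203, query_total_mb())
# gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=fraction)
```

Note the fraction is only a soft cap, as the question's own numbers show: TensorFlow's allocator can overshoot it somewhat.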

Upvotes: 0

Views: 557

Answers (1)

Vladimir Sotnikov

Reputation: 1489

Sure! You can create a virtual GPU device with a hard-coded memory limit:

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  # Restrict TensorFlow to only allocate 2203 MB of memory on the first GPU
  try:
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2203)]) # limit in megabytes
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
  except RuntimeError as e:
    # Virtual devices must be set before GPUs have been initialized
    print(e)
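
For what it's worth, newer TensorFlow 2.x releases expose the same idea under stable (non-experimental) names, tf.config.set_logical_device_configuration and tf.config.LogicalDeviceConfiguration. A minimal sketch, assuming a TF 2.x install (on a machine with no visible GPU the loop body simply never runs):

```python
import tensorflow as tf

# TF 2.x stable equivalent of the experimental virtual-device API:
# cap the first physical GPU at 2203 MB by creating one logical device.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        tf.config.set_logical_device_configuration(
            gpus[0],
            [tf.config.LogicalDeviceConfiguration(memory_limit=2203)])
    except RuntimeError as e:
        # Must be called before the GPU has been initialized
        print(e)
```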

Upvotes: 1
