Reputation: 91
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      6944    C     python3                                  11585MiB   |
|    1      6944    C     python3                                  11587MiB   |
|    2      6944    C     python3                                  10621MiB   |
+-----------------------------------------------------------------------------+
As the nvidia-smi output above shows, the GPU memory is not freed after TensorFlow is stopped in the middle of a run.
I tried using this:
config = tf.ConfigProto()
config.gpu_options.allocator_type = 'BFC'
config.gpu_options.per_process_gpu_memory_fraction = 0.90
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
I also tried:
with tf.device('/gpu:0'):
    with tf.Graph().as_default():
I then tried resetting the GPU:
sudo nvidia-smi --gpu-reset -i 0
The memory still cannot be freed at all.
Upvotes: 2
Views: 927
Reputation: 91
The solution was obtained from the question "Tensorflow set CUDA_VISIBLE_DEVICES within jupyter"; thanks, Yaroslav.
Most of the information was obtained from the TensorFlow documentation on Stack Overflow. I am not allowed to post the link; not sure why.
Insert this at the beginning of your code.
import os

from tensorflow.python.client import device_lib

# Set the environment variables
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Double check that you have the correct devices visible to TF
print("{0}\nThe available CPU/GPU devices on your system\n{0}".format('=' * 100))
print(device_lib.list_local_devices())
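Note that CUDA reads these environment variables when TensorFlow first initializes the GPU (for example, when the first session is created or device_lib.list_local_devices() is called), so set them before that point. As a hypothetical check, with CUDA_VISIBLE_DEVICES set to "0" at most one GPU should be listed:

gpu_devices = [d.name for d in device_lib.list_local_devices() if d.device_type == 'GPU']
print(gpu_devices)  # expect at most one entry, e.g. ['/device:GPU:0']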
There are different options for running on the GPU or the CPU. I am using the CPU; this can be changed with the options below (a short sketch of how such a device block is used follows them):
with tf.device('/cpu:0'):
# with tf.device('/gpu:0'):
# with tf.Graph().as_default():
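For context, a minimal TF 1.x sketch of how such a device block is typically used (the constants here are illustrative, not from the original code):

import tensorflow as tf

with tf.device('/cpu:0'):  # switch to '/gpu:0' to place these ops on the GPU
    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0], [4.0]])
    c = tf.matmul(a, b)    # this op is pinned to the chosen device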
Use the following lines when creating the session:
config = tf.ConfigProto(device_count={'GPU': 1},
                        log_device_placement=False,
                        allow_soft_placement=True)
# Allocate only as much GPU memory as runtime allocations require
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
# The session needs to be closed when you are done with it
sess.close()

# Alternatively, let a 'with' block close the session for you:
with tf.Session(config=config) as sess:
    ...
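As a general note: TensorFlow's allocator typically does not return GPU memory to the driver until the owning process exits, so nvidia-smi can keep showing the usage of a lingering process (for example, a Jupyter kernel) even after the session is closed, which matches the symptom in the question.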
Another helpful article explains the importance of 'with'. Please also check the official tf.Session() documentation from TensorFlow.
To find out which devices your operations and tensors are assigned to, create the session with the log_device_placement configuration option set to True. To have TensorFlow automatically choose an existing, supported device to run the operations on when the specified one doesn't exist, set allow_soft_placement=True in the configuration when creating the session. A short sketch combining both options follows.
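A minimal TF 1.x sketch of both options together (the matmul here is illustrative, not from the original answer):

import tensorflow as tf

config = tf.ConfigProto(log_device_placement=True,   # log where each op is placed
                        allow_soft_placement=True)    # fall back to an available device
config.gpu_options.allow_growth = True

with tf.device('/gpu:0'):  # may not exist; soft placement falls back to the CPU
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)

with tf.Session(config=config) as sess:  # the 'with' block closes the session for you
    print(sess.run(b))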
Upvotes: 1