I'm training multiple models sequentially, which will be memory-consuming if I keep all of the models around without any cleanup. However, I am not aware of any way to clear the graph and free the GPU memory in TensorFlow 2.x.
tf.keras.backend.clear_session() does not work in my case, as I've defined some custom layers. tf.compat.v1.reset_default_graph() does not work either.
Upvotes: 1
Views: 5177
A few workarounds to avoid the memory growth. Use any one of them:
1. Delete the model reference, clear the Keras session, and force garbage collection:
import gc
del model
tf.keras.backend.clear_session()
gc.collect()
2. Enable allow_growth (e.g. by setting TF_FORCE_GPU_ALLOW_GROWTH=true in the environment). This prevents TF from allocating all of the GPU memory on first use, and instead lets its memory footprint grow over time.
3. Enable the new CUDA malloc async allocator by setting TF_GPU_ALLOCATOR=cuda_malloc_async in the environment.
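As a minimal sketch of workarounds 2 and 3: both environment variables must be set before TensorFlow is imported (or at least before the first GPU operation), otherwise they have no effect. Setting them from Python rather than the shell might look like this (the comments about version availability are my assumption; cuda_malloc_async requires a relatively recent TF/CUDA combination):

```python
import os

# Must run before `import tensorflow` for the settings to take effect.
# Grow GPU memory on demand instead of grabbing it all up front:
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
# Use the CUDA async allocator (assumed to need a recent TF + CUDA 11.2+):
os.environ["TF_GPU_ALLOCATOR"] = "cuda_malloc_async"

print(os.environ["TF_FORCE_GPU_ALLOW_GROWTH"])
print(os.environ["TF_GPU_ALLOCATOR"])
```

If you launch training from a shell script instead, exporting the same variables there achieves the same thing.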
Upvotes: 1