Reputation: 581
I have two models.
Both models A and B work in training and testing when I run them separately. To train the two models on the same dataset more efficiently, I put their running code together:
A.training()
A.close_session() # this closes session with sess.close()
B.training()
At B.training() a Resource exhausted error occurs!
So it seems the GPU memory is not released when I call sess.close() after A.training(). This 'sess' is an attribute that A and B each hold separately, i.e. it is used as self.sess.
Is this a bug? Is there a solution?
I have googled and read some discussions, and closing the session alone does not seem to release the GPU memory. How can I release the GPU memory so the next model can use it?
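For reference, a minimal TF 1.x-style sketch of the setup being described (the Model class body here is an assumption; only the self.sess attribute and the close_session() pattern come from the question):

import tensorflow as tf

class Model:
    def __init__(self):
        self.graph = tf.Graph()
        with self.graph.as_default():
            # build the network, loss and optimizer here (assumed)
            self.sess = tf.Session(graph=self.graph)

    def training(self):
        # run training steps with self.sess.run(...) (assumed)
        pass

    def close_session(self):
        self.sess.close()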
Upvotes: 1
Views: 623
Reputation: 1680
TensorFlow does not like multiple session calls in the same thread.
One workaround is to put your A.training() and B.training() in different processes. Here is a quick walkthrough:
from multiprocessing import Process

def train_func():
    # your existing training code goes here, e.g. A.training()
    train(
        learning_rate=0.001,
        # ... other hyperparameters
    )

if __name__ == '__main__':
    p = Process(target=train_func, args=tuple())
    p.start()
    p.join()
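Applied to the question's setup, it would look roughly like this (a sketch; ModelA and ModelB are hypothetical stand-ins for however your models are constructed, and each one must be built inside its own child process so the GPU memory is reclaimed when that process exits):

from multiprocessing import Process

def train_a():
    A = ModelA()   # hypothetical constructor; build graph/session inside the process
    A.training()

def train_b():
    B = ModelB()   # hypothetical constructor
    B.training()

if __name__ == '__main__':
    for func in (train_a, train_b):
        p = Process(target=func)
        p.start()
        p.join()   # GPU memory is freed when the child process terminates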
Upvotes: 3