Reputation: 13

Training a Keras model in a loop -> working memory increases

I have noticed a strange behavior when training my Keras model.

I have two functions, generate_net and train_net.

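Roughly, they look like this; only the function names come from my real code, the layer sizes and training data in this sketch are made up:

    import numpy as np
    import tensorflow as tf

    def generate_net():
        # Build and compile a small model (placeholder architecture).
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(10,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    def train_net(model):
        # Fit on random data (placeholder for my real training data).
        x = np.random.rand(256, 10)
        y = np.random.rand(256, 1)
        model.fit(x, y, epochs=1, verbose=0)
        return model
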
When I call them like this, the working memory remains more or less constant:

    for i in range(1, 100):
        model = generate_net(...)
    for i in range(1, 100):
        model = train_net(model=model, ...)

However, if I call them like this, the working memory increases with each iteration (which leads to a crash in the real use case):

    for i in range(1, 100):
        model = generate_net(...)
        model = train_net(model=model, ...)
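
For reference, this is roughly how I watch the memory per iteration (reading the resident set size via psutil is just my illustration here, not essential to the problem):

    import os
    import psutil

    process = psutil.Process(os.getpid())
    for i in range(1, 100):
        model = generate_net()
        model = train_net(model=model)
        # Resident memory in MiB after each iteration; it grows steadily.
        print(i, process.memory_info().rss / 2**20)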

Does anyone know why this behavior occurs?

EDIT: If I add the following inside the for-loop of the second example, the memory still increases from iteration to iteration:

    del model
    gc.collect()
    tf.keras.backend.clear_session()
    gc.collect()
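
Put together, the loop from the second example then looks like this (I placed the cleanup at the end of the loop body, using the sketch functions from above):

    import gc
    import tensorflow as tf

    for i in range(1, 100):
        model = generate_net()
        model = train_net(model=model)
        # Attempted cleanup: drop the reference, collect garbage,
        # reset Keras/TF global state, then collect again.
        del model
        gc.collect()
        tf.keras.backend.clear_session()
        gc.collect()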

Upvotes: 1

Views: 29

Answers (0)
