BigBadMe

Reputation: 1852

Creating a new Sequential model inside a for loop (using Keras)

I want to try training my model with different hyperparameters, so I've set up a series of nested for loops to iterate through them.

model = None
batch_generator = None

for sequence_length in all_sequence_length:
    for label_periods in all_label_periods:
        for num_layers in all_num_layers:
            for num_units in all_num_units:
                loadFiles()
                createmodel()
                trainmodel()

The first iteration creates a model like this:

Layer (type)                 Output Shape              Param #
=================================================================
cu_dnnlstm_1 (CuDNNLSTM)     (None, 100, 75)           45300
_________________________________________________________________
dropout_1 (Dropout)          (None, 100, 75)           0
_________________________________________________________________
cu_dnnlstm_2 (CuDNNLSTM)     (None, 100, 75)           45600
_________________________________________________________________
dropout_2 (Dropout)          (None, 100, 75)           0
_________________________________________________________________
cu_dnnlstm_3 (CuDNNLSTM)     (None, 100, 75)           45600
_________________________________________________________________
dropout_3 (Dropout)          (None, 100, 75)           0
_________________________________________________________________
cu_dnnlstm_4 (CuDNNLSTM)     (None, 100, 75)           45600
_________________________________________________________________
dropout_4 (Dropout)          (None, 100, 75)           0
_________________________________________________________________
cu_dnnlstm_5 (CuDNNLSTM)     (None, 100, 75)           45600
_________________________________________________________________
dropout_5 (Dropout)          (None, 100, 75)           0
_________________________________________________________________
dense_1 (Dense)              (None, 3)                 228
=================================================================

I then call model.fit_generator() to train the model, and that executes fine. On the next loop iteration the model is created again, and its summary looks like this:

Layer (type)                 Output Shape              Param #
=================================================================
cu_dnnlstm_6 (CuDNNLSTM)     (None, 100, 75)           45300
_________________________________________________________________
dropout_6 (Dropout)          (None, 100, 75)           0
_________________________________________________________________
cu_dnnlstm_7 (CuDNNLSTM)     (None, 100, 75)           45600
_________________________________________________________________
dropout_7 (Dropout)          (None, 100, 75)           0
_________________________________________________________________
cu_dnnlstm_8 (CuDNNLSTM)     (None, 100, 75)           45600
_________________________________________________________________
dropout_8 (Dropout)          (None, 100, 75)           0
_________________________________________________________________
cu_dnnlstm_9 (CuDNNLSTM)     (None, 100, 75)           45600
_________________________________________________________________
dropout_9 (Dropout)          (None, 100, 75)           0
_________________________________________________________________
cu_dnnlstm_10 (CuDNNLSTM)    (None, 100, 75)           45600
_________________________________________________________________
dropout_10 (Dropout)         (None, 100, 75)           0
_________________________________________________________________
dense_2 (Dense)              (None, 3)                 228
=================================================================

You'll see the layer IDs have incremented, which surprised me: I create a brand-new Sequential model in the model variable, so I'd have expected the same summary as the first time.

When I call model.fit_generator() I get this error:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'cu_dnnlstm_1_input' with dtype float and shape [?,100,74]

You'll see it's expecting an input for cu_dnnlstm_1_input, which was the input layer of the first model, not cu_dnnlstm_6 from the second. My code for creating the model is in a function:

def createmodel():

    global model

    model = Sequential()
    model.add( CuDNNLSTM(units=num_units, return_sequences=True, input_shape=(sequence_length, features_size) ) )

    for _ in range(num_layers):
        model.add( Dropout(dropout_rate) )
        model.add( CuDNNLSTM(units=num_units, return_sequences=True) )

    model.add( Dropout(dropout_rate) )
    model.add( CuDNNLSTM(units=num_units, return_sequences=False) )

    model.add( Dropout(dropout_rate) )
    model.add( Dense(labels_size) )

    model.compile(loss='mean_absolute_error', optimizer='adam')

    model.summary()

The model is trained with this function:

def trainmodel():

    global model

    model.fit_generator(generator=batch_generator,
        epochs=num_epochs,
        steps_per_epoch=num_steps_per_epoch,
        validation_data=validation_data_tuple,
        callbacks=callbacks)

Can anyone spot my 'deliberate' mistake?

Upvotes: 3

Views: 7371

Answers (1)

A cup of tea

Reputation: 402

I suspect this happens because Keras creates all of your models on the same TensorFlow graph. The second model is built alongside the leftovers of the first one (hence the complaint about the cu_dnnlstm_1_input placeholder), and since the models have different architectures, training fails.
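
To illustrate the shared-graph bookkeeping, here is a minimal sketch of my own (assuming standalone Keras on the TensorFlow 1.x backend; the Dense layers are only placeholders, not your architecture): building two models back to back on the same default graph gives the second one incremented layer names, just like the cu_dnnlstm_6 ... cu_dnnlstm_10 names above.

from keras.models import Sequential
from keras.layers import Dense

# Layer-name counters are kept per graph, so the second model's layer
# gets a fresh suffix rather than reusing dense_1.
first = Sequential([Dense(3, input_shape=(4,))])
second = Sequential([Dense(3, input_shape=(4,))])

print([layer.name for layer in first.layers])   # e.g. ['dense_1']
print([layer.name for layer in second.layers])  # e.g. ['dense_2']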

Try importing TensorFlow:

import tensorflow as tf

and modify your loop this way:

for sequence_length in all_sequence_length:
    for label_periods in all_label_periods:
        for num_layers in all_num_layers:
            for num_units in all_num_units:
                graph = tf.Graph()              # fresh graph for this hyperparameter combination
                with tf.Session(graph=graph):   # build and train this model in its own session
                    loadFiles()
                    createmodel()
                    trainmodel()
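
As a side note (my own addition, not part of the fix above): if you'd rather not manage graphs and sessions by hand, standalone Keras also exposes keras.backend.clear_session(), which discards the old graph before the next model is built. A minimal sketch using the same loop variables as the question:

import keras.backend as K

for sequence_length in all_sequence_length:
    for label_periods in all_label_periods:
        for num_layers in all_num_layers:
            for num_units in all_num_units:
                K.clear_session()   # drop the previous model's graph and session
                loadFiles()
                createmodel()
                trainmodel()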

Upvotes: 3
