NeStack

Reputation: 2014

keras load_model gives TypeError: int() argument 'NoneType'

I am successfully training a neural network with Keras and TensorFlow on Colab, like this:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dropout, AveragePooling1D, Dense

tf.keras.backend.clear_session()

logpath_ms = './best_model.h5'
modelsave_cb = tf.keras.callbacks.ModelCheckpoint(logpath_ms, monitor='val_loss', mode='min', verbose=1, save_best_only=True)

model = Sequential()
model.add(Bidirectional(LSTM(units=30, return_sequences=True, input_shape=(n_input, X.shape[1]))))
model.add(Dropout(0.2))
model.add(AveragePooling1D(pool_size=(4), strides=4))
model.add(LSTM(units=30, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=30, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=30))
model.add(Dropout(0.2))
model.add(Dense(units=1, activation='linear'))
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mse'])

model.fit(train_generator, validation_data=val_generator, epochs=10, verbose=1,
          callbacks=[modelsave_cb])

As you can see, I save the model via the ModelCheckpoint callback whenever there is an improvement after an epoch. Unfortunately, when I try to load the model afterwards, I get this error:

model = load_model(logpath_ms)

TypeError                                 Traceback (most recent call last)
<ipython-input-32-ab300646bc5b> in <module>()
----> 1 model = load_model(logpath_ms)

28 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py in _compute_fans(shape)
   1423     fan_in = shape[-2] * receptive_field_size
   1424     fan_out = shape[-1] * receptive_field_size
-> 1425   return int(fan_in), int(fan_out)
   1426 
   1427 

TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

Here is the link to the Colab notebook and here to the h5 model file. I thought the problem might be that the h5 file is empty (because of the 'NoneType' in the error message), but it isn't. There is also no typo in the file path; otherwise I would get a different error message. What is the reason for the error, and how can I solve it?

Upvotes: 1

Views: 452

Answers (1)

runDOSrun

Reputation: 10985

Your model was compiled in TF 1.x, but you're executing it in 2.x. If you execute it in 1.x, it works. You can put this on the first line of your Colab notebook to switch versions:

%tensorflow_version 1.x
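
For context, here is a minimal sketch of how this fits into the notebook (reusing the checkpoint path from the question). The version switch only takes effect if the magic runs before TensorFlow is imported in the runtime, so it belongs in the very first cell:

# First cell of the Colab notebook: switch versions *before* importing TensorFlow.
%tensorflow_version 1.x

import tensorflow as tf
from tensorflow.keras.models import load_model

print(tf.__version__)  # should report a 1.x release

# Load the checkpoint saved by ModelCheckpoint in the question.
model = load_model('./best_model.h5')
model.summary()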

Upvotes: 2
