Cadone

Reputation: 101

keras load_model TypeError: int() argument 'NoneType'

I successfully trained a model and saved it with:

model.save("saved_model.h5")

Now I have moved the .h5 file to another computer and want to load it the simple way:

from tensorflow.keras.models import load_model
model=load_model("saved_model.h5")

but I keep getting this error:

Exception has occurred: TypeError
int() argument must be a string, a bytes-like object or a number, not 'NoneType'

I don't understand why; I have checked the TensorFlow versions on both PCs and both have version 2.1.0.

EDIT 1: Here's my architecture, in case anyone wonders whether I am using custom layers:

model = Sequential()
model.add(Bidirectional(LSTM(len(cols) * 2, input_shape=(batch_size, len(cols)), return_sequences=True)))
model.add(Dropout(0.20))
model.add(Bidirectional(LSTM(units=len(cols), return_sequences=True)))
model.add(Dropout(0.20))
model.add(Bidirectional(LSTM(units=len(cols), return_sequences=False)))
model.add(Dense(units=512, activation='sigmoid'))
model.add(Dense(units=n_future - 1, activation='sigmoid'))
model.compile(loss='mae', optimizer='adam', metrics=['mse', 'mape'])
model.fit(x=test_generator_scaled, epochs=450, verbose=1, validation_data=val_generator_scaled, shuffle=False)
model.save('saved_model.h5')

Upvotes: 2

Views: 631

Answers (1)

Cadone

Reputation: 101

I had to check how the model was being saved, so I used model.to_json() to see whether there was any problem in it. This is what the .json contained:

{"class_name": "Sequential", "config":
{"name": "sequential", "layers": [
{"class_name": "InputLayer", "config": {"batch_input_shape": [null, null, null], "dtype": "float32", "sparse": false, "ragged": false, "name": "bidirectional_input"}}, 
{"class_name": "Bidirectional", "config": {"name": "bidirectional", "trainable": true, "dtype": "float32", "layer": 
{"class_name": "LSTM", "config": {"name": "lstm", "trainable": true, "batch_input_shape": [null, 256, 541], "dtype": "float32", "return_sequences": true, "return_state": false, "go_backwards": false, "stateful": false, "unroll": false, "time_major": false, "units": 1082, "activation": "tanh", "recurrent_activation": "sigmoid", "use_bias": true, "kernel_initializer": 
{"class_name": "GlorotUniform", "config": {"seed": null}}, "recurrent_initializer": 
{"class_name": "Orthogonal", "config": {"gain": 1.0, "seed": null}}, "bias_initializer": 
{"class_name": "Zeros", "config": {}}, "unit_forget_bias": true, "kernel_regularizer": null, "recurrent_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "recurrent_constraint": null, "bias_constraint": null, "dropout": 0.0, "recurrent_dropout": 0.0, "implementation": 2}}, "merge_mode": "concat"}}, 
{"class_name": "Dropout", "config": {"name": "dropout", "trainable": true, "dtype": "float32", "rate": 0.2, "noise_shape": null, "seed": null}}, 
{"class_name": "Bidirectional", "config": {"name": "bidirectional_1", "trainable": true, "dtype": "float32", "layer": 
{"class_name": "LSTM", "config": {"name": "lstm_1", "trainable": true, "dtype": "float32", "return_sequences": false, "return_state": false, "go_backwards": false, "stateful": false, "unroll": false, "time_major": false, "units": 541, "activation": "tanh", "recurrent_activation": "sigmoid", "use_bias": true, "kernel_initializer": 
{"class_name": "GlorotUniform", "config": {"seed": null}}, "recurrent_initializer": 
{"class_name": "Orthogonal", "config": {"gain": 1.0, "seed": null}}, "bias_initializer": 
{"class_name": "Zeros", "config": {}}, "unit_forget_bias": true, "kernel_regularizer": null, "recurrent_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "recurrent_constraint": null, "bias_constraint": null, "dropout": 0.0, "recurrent_dropout": 0.0, "implementation": 2}}, "merge_mode": "concat"}}, 
{"class_name": "Dropout", "config": {"name": "dropout_1", "trainable": true, "dtype": "float32", "rate": 0.2, "noise_shape": null, "seed": null}}, 
{"class_name": "Dense", "config": {"name": "dense", "trainable": true, "dtype": "float32", "units": 512, "activation": "sigmoid", "use_bias": true, "kernel_initializer": 
{"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": 
{"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, 
{"class_name": "Dense", "config": {"name": "dense_1", "trainable": true, "dtype": "float32", "units": 59, "activation": "sigmoid", "use_bias": true, "kernel_initializer": 
{"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer":
{"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}]}, "keras_version": "2.4.0", "backend": "tensorflow"}
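As a side note, model.to_json() returns one long line; pretty-printing it with the standard json module makes each layer's config much easier to scan. The string below is a minimal stand-in for the real output:

```python
import json

# `raw` stands in for the one-line string returned by model.to_json();
# pretty-printing it puts each layer config on its own indented lines.
raw = '{"class_name": "Sequential", "config": {"name": "sequential", "layers": []}}'
pretty = json.dumps(json.loads(raw), indent=2)
print(pretty)
```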

After debugging, the problem turned out to be in the "InputLayer", where batch_input_shape=[null, null, null] had to be changed to my actual batch input shape, batch_input_shape=[null, 256, 541]. After that, it worked.

So, for anyone who runs into this same problem: a good approach is to save the model as .json and start debugging!
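The fix above can be sketched as code, under the assumption that the architecture was dumped via model.to_json() and that the weights in the original .h5 file are still intact (the shape [null, 256, 541] comes from the dumped JSON above). Only the JSON patching is shown concretely; the Keras rebuild/reload step is left as comments since it depends on your environment:

```python
import json

# Minimal stand-in for the broken architecture dict; in practice this is
# json.loads() of the string returned by model.to_json().
arch = {
    "class_name": "Sequential",
    "config": {
        "name": "sequential",
        "layers": [
            {"class_name": "InputLayer",
             "config": {"batch_input_shape": [None, None, None],
                        "dtype": "float32",
                        "name": "bidirectional_input"}},
        ],
    },
}

# Patch the InputLayer's batch_input_shape ([null, null, null] in the
# dumped JSON) to the real (batch, timesteps, features) shape.
for layer in arch["config"]["layers"]:
    if layer["class_name"] == "InputLayer":
        layer["config"]["batch_input_shape"] = [None, 256, 541]

fixed_json = json.dumps(arch)

# The patched architecture can then be rebuilt and the trained weights
# restored from the original file, e.g.:
#   from tensorflow.keras.models import model_from_json
#   model = model_from_json(fixed_json)
#   model.load_weights("saved_model.h5")
print(json.loads(fixed_json)["config"]["layers"][0]["config"]["batch_input_shape"])
# -> [None, 256, 541]
```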

Upvotes: 1
