Reputation: 367
I am using Keras with TensorFlow to implement a deep convolutional auto-encoder:
So basically the model would be similar to:
from keras.layers import Input, Conv2D, Conv2DTranspose
from keras.models import Model

input_data = Input(shape=(40, 500, 1))

# encoder
x = Conv2D(32, kernel_size=(3, 3), padding="same", activation='linear')(input_data)
encoded = Conv2D(15, kernel_size=(1, 2), strides=(1, 2), padding="same", activation='linear')(x)

# decoder
x = Conv2DTranspose(15, kernel_size=(1, 2), padding="same", activation='linear')(encoded)
x = Conv2DTranspose(32, kernel_size=(3, 3), padding="same", activation='linear')(x)
decoded = Conv2DTranspose(1, (3, 3), activation=activation_function, padding="same")(x)  # activation_function: whichever output activation you chose

autoencoder = Model(inputs=input_data, outputs=decoded)
encoder = Model(inputs=input_data, outputs=encoded)
In order to save the best model weights during training, I am using ModelCheckpoint:
from keras.callbacks import ModelCheckpoint

autoencoder.compile(loss='mean_squared_error', optimizer='rmsprop')

checkpoint = ModelCheckpoint('bestweight.best.hdf5', monitor='val_loss', verbose=1,
                             save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history_info = autoencoder.fit(x_train, x_train,
                               batch_size=batch_size,
                               epochs=50,
                               validation_data=(x_validation, x_validation),
                               callbacks=callbacks_list,
                               shuffle=True)
and then later to test on the test dataset:
autoencoder.load_weights('bestweight.best.hdf5')
autoencoder.predict(test_data)
My question is:
I know how to save the best weights of the whole auto-encoder, but is there a way to save only the best training weights of the encoder part, so that I can use it later for testing like this:
encoder.load_weights('encoderbestweight.best.hdf5')
encoder.predict(test_data)
Upvotes: 4
Views: 3850
Reputation: 91
The encoder part is made up of the first two Conv2D layers, so after autoencoder.fit() you can build it like this:
encoder = Model(input_data, autoencoder.layers[2].output)
For more details, see https://www.kaggle.com/marlesson/autoencoder-embedding-for-food
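A minimal usage sketch, assuming the in-memory autoencoder already holds the weights you want (for example after autoencoder.load_weights); encoded_test is just an illustrative name:

# The encoder shares the autoencoder's trained layers, so once the
# autoencoder holds the desired weights, the encoder can predict directly.
encoder = Model(input_data, autoencoder.layers[2].output)
encoded_test = encoder.predict(test_data)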
Upvotes: 0
Reputation: 51
Before trying to answer your question, I would like to make a quick remark about your use of the ModelCheckpoint callback. Let's have a look at its default parameters:
keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)
The save_weights_only parameter defaults to False, which means what you are actually saving is not just the model's weights but the entire model, architecture included! So when restoring it, you can either redefine the model and use load_weights, or load the model directly from the file with the load_model function.
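In other words, with save_weights_only left at False you have two ways to restore the checkpointed autoencoder (a short sketch reusing the file name from your checkpoint):

from keras.models import load_model

# Option 1: redefine the architecture exactly as before, then restore only the weights.
autoencoder.load_weights('bestweight.best.hdf5')

# Option 2: restore architecture and weights in one step from the saved file.
autoencoder = load_model('bestweight.best.hdf5')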
Now, to save only the encoder, I would write a new checkpoint callback, like this:
import numpy as np
from keras.callbacks import Callback

class CustomCheckpoint(Callback):

    def __init__(self, filepath, encoder):
        super(CustomCheckpoint, self).__init__()
        self.monitor = 'val_loss'
        self.monitor_op = np.less      # "best" means the lowest value seen so far
        self.best = np.Inf

        self.filepath = filepath
        self.encoder = encoder         # the sub-model whose weights we want to save

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get(self.monitor)
        if self.monitor_op(current, self.best):
            self.best = current
            # self.encoder.save_weights(self.filepath, overwrite=True)
            self.encoder.save(self.filepath, overwrite=True)  # Whichever you prefer
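For completeness, here is a minimal sketch of how this callback could replace ModelCheckpoint in the training call from your question (the file name encoderbestweight.best.hdf5 mirrors the one you intend to load later):

checkpoint = CustomCheckpoint('encoderbestweight.best.hdf5', encoder)

history_info = autoencoder.fit(x_train, x_train,
                               batch_size=batch_size,
                               epochs=50,
                               validation_data=(x_validation, x_validation),
                               callbacks=[checkpoint],
                               shuffle=True)

If you keep the save_weights variant inside the callback, load the result back with encoder.load_weights; with save, use load_model instead.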
As an alternative, since you already have a save file for the entire network, you can separate your encoder from the decoder like this:
from keras.models import Model, load_model

autoencoder = load_model("path_to_file")
# With the model from the question, layers[0] is the Input layer and the
# encoded output comes from the second Conv2D layer, i.e. layers[2].
encoder = Model(autoencoder.layers[0].input, autoencoder.layers[2].output)
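The extracted encoder already carries the trained weights from the loaded file, so it can be used (or re-saved) directly, for example:

# Predict encoded representations with the best weights loaded from file,
# or save the encoder's weights separately for later reuse.
encoder.predict(test_data)
encoder.save_weights('encoderbestweight.best.hdf5')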
Upvotes: 5