Reputation: 87
We built a small CNN with Keras (TensorFlow backend), using Keras's functional API. We're interested in passing the output of the last convolutional/pooling layer (the one just before the fully connected layers) as an input to another CNN.
For simplicity, here is a simplified example to discuss:
from keras.utils import plot_model
from keras.models import Model
from keras.layers import Input, Dense, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
visible = Input(shape=(64, 64, 1))
conv1 = Conv2D(32, kernel_size=4, activation='relu')(visible)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(16, kernel_size=4, activation='relu')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
flat = Flatten()(pool2)  # flatten the feature maps before the dense layers
hidden1 = Dense(10, activation='relu')(flat)
output = Dense(1, activation='sigmoid')(hidden1)
model = Model(inputs=visible, outputs=output)
model.compile(optimizer='Adam',
              loss='binary_crossentropy',  # single sigmoid unit -> binary crossentropy
              metrics=['accuracy'])
# train_dataset, train_labels, valid_dataset, valid_labels and early_stop
# are assumed to be defined elsewhere
model.fit(train_dataset,
          train_labels,
          epochs=400,
          batch_size=512,
          validation_data=(valid_dataset, valid_labels),
          verbose=1,
          callbacks=[early_stop])
# summarize layers
print(model.summary())
# plot graph
plot_model(model, to_file='convolutional_neural_network.png')
The question is: how can I pass the pool2 layer as an input to some other simple model using Keras, so that it trains simultaneously with the first model described above?
Upvotes: 0
Views: 212
Reputation: 381
One possible way would be to extend your model so that everything is contained in a single model that ends in two branches. The functional API in Keras lets you define connections between layers however you want, and it also provides the infrastructure for multiple outputs and loss functions.
For example:
from keras.utils import plot_model
from keras.models import Model
from keras.layers import Input, Dense, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
visible = Input(shape=(64, 64, 1))
conv1 = Conv2D(32, kernel_size=4, activation='relu')(visible)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(16, kernel_size=4, activation='relu')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)

# add your second branch here, starting from pool2
X = FirstLayer()(pool2)  # replace with your actual network layer
# ...
output2 = YourSecondOutput()(X)  # replace with your actual second output layer

# first branch (same as before)
flat = Flatten()(pool2)  # flatten the feature maps before the dense layers
hidden1 = Dense(10, activation='relu')(flat)
output = Dense(1, activation='sigmoid')(hidden1)

model = Model(inputs=visible, outputs=[output, output2])  # list of outputs
model.compile(optimizer='Adam',
              loss=['binary_crossentropy', 'mse'],  # one loss per output; 'mse' is a placeholder for whatever suits output2
              metrics=['accuracy'])
# note: with two outputs, fit() expects one label array per output (see below)
model.fit(train_dataset,
          train_labels,
          epochs=400,
          batch_size=512,
          validation_data=(valid_dataset, valid_labels),
          verbose=1,
          callbacks=[early_stop])
# summarize layers
print(model.summary())
# plot graph
plot_model(model, to_file='convolutional_neural_network.png')
Then you'll just need to update the inputs to fit() so that you have labels for each output. You can find more info in the Keras documentation on multi-input and multi-output models.
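For instance, here is a minimal sketch of what the compile and fit calls could look like once both outputs have their own loss and labels. The label arrays train_labels2 / valid_labels2 and the 'mse' loss for the second output are hypothetical placeholders for whatever fits your second branch:

model.compile(optimizer='Adam',
              loss=['binary_crossentropy', 'mse'],  # one loss per output, same order as the outputs list
              loss_weights=[1.0, 0.5],              # optional: weight each output's contribution to the total loss
              metrics=['accuracy'])
# pass one label array per output, in the same order as the outputs list
model.fit(train_dataset,
          [train_labels, train_labels2],
          epochs=400,
          batch_size=512,
          validation_data=(valid_dataset, [valid_labels, valid_labels2]),
          verbose=1,
          callbacks=[early_stop])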
Upvotes: 1