Reputation: 1486
I have two saved models. I want to load and connect the output from model-1 to the input for model-2:
import tensorflow as tf
from tensorflow.keras import models
from tensorflow.keras.layers import Reshape
# Load model1
model1 = tf.keras.models.load_model('/path/to/model1.h5')
# Load model2
model2 = tf.keras.models.load_model('/path/to/model2.h5')
# get the input/output tensors
model1Output = model1.output
model2Input = model2.input
# reshape model1's output to fit
x = Reshape((imageHeight, imageWidth, 3))(model1Output)
# how do I set 'x' as the input to model2?
# this is the combined model I want to train
model = models.Model(inputs=model1.input, outputs=model2.output)
I know you can set the input when you instantiate a layer, by passing the input tensor as a parameter (x = Input(shape), then calling the layer on x; see the small example below). But how do you set the input, in my case x, on an existing layer? I've looked at the documentation for the Layer class here, but I can't see it mentioned.
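For reference, this is the pattern I mean (just a minimal illustration; the Dense layer is a placeholder, not part of my models):
from tensorflow.keras.layers import Input, Dense
inp = Input(shape=(16,))
out = Dense(8)(inp)  # the layer's input is set by calling it on a tensor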
Edit:
Adding the summaries of both models...
Here is the top of model1:
__________________________________________________________________________________________________
conv2d_transpose_3 (Conv2DTrans (None, 304, 304, 16) 4624 activation_14[0][0]
__________________________________________________________________________________________________
dropout_7 (Dropout) (None, 304, 304, 32) 0 concatenate[3][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 304, 304, 16) 4624 dropout_7[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 304, 304, 16) 64 conv2d_17[0][0]
__________________________________________________________________________________________________
activation_16 (Activation) (None, 304, 304, 16) 0 batch_normalization_17[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 304, 304, 10) 170 activation_16[0][0]
==================================================================================================
And here is the input of model2:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 299, 299, 3) 0
__________________________________________________________________________________________________
block1_conv1 (Conv2D) (None, 149, 149, 32) 864 input_1[0][0]
__________________________________________________________________________________________________
block1_conv1_bn (BatchNormaliza (None, 149, 149, 32) 128 block1_conv1[0][0]
__________________________________________________________________________________________________
block1_conv1_act (Activation) (None, 149, 149, 32) 0 block1_conv1_bn[0][0]
__________________________________________________________________________________________________
block1_conv2 (Conv2D) (None, 147, 147, 64) 18432 block1_conv1_act[0][0]
__________________________________________________________________________________________________
I need the output of conv2d_18 in model1 to be fed as the input to block1_conv1 in model2.
Upvotes: 1
Views: 963
Reputation: 1486
Found another way to do this, which makes more sense to me at least:
import tensorflow as tf
from tensorflow.keras import models
from tensorflow.keras.layers import Reshape
# Load model1
model1 = tf.keras.models.load_model('/path/to/model1.h5')
# Load model2
model2 = tf.keras.models.load_model('/path/to/model2.h5')
# reduce the 10 channels to a single channel
newModel2Input = tf.math.reduce_max(model1.output, axis=-1)
# reshape to the 3-channel input expected by model2
newModel2Input = Reshape((299, 299, 3))(newModel2Input)
# this is the combined model I want to train
model = models.Model(inputs=model1.input, outputs=model2(newModel2Input))
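Once the combined model is built, it can be compiled and trained like any other Keras model; a minimal sketch (the optimizer, loss, and the x_train / y_train data are placeholders, not from my actual setup):
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()
# x_train: inputs for model1, y_train: targets for model2's output
model.fit(x_train, y_train, epochs=10, batch_size=8)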
Upvotes: 1
Reputation: 1310
Suppose you have two models, model1 and model2. You can pass the output of one model as the input to the other like this:
In the code below, model2.layers[1:] starts at index 1 specifically for your question, to skip model2's input layer so the tensor is propagated through the model from its second layer onward.
Between the models you may need extra convolution layers to make the shapes fit (see the sketch after the code).
import tensorflow as tf

def mymodel():
    # Load model1
    model1 = tf.keras.models.load_model('/path/to/model1.h5')
    # Load model2
    model2 = tf.keras.models.load_model('/path/to/model2.h5')
    x = model1.output
    # optional adapter layer if the shapes do not match, e.g.:
    # x = tf.keras.layers.Conv2D(10, (3, 3))(x)
    # skip model2's input layer and re-apply its remaining layers to x
    for layer in model2.layers[1:]:
        x = layer(x)
    model = tf.keras.models.Model(inputs=model1.input, outputs=x)
    return model
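If the shapes between the two models do not line up, one option is a small trainable adapter between them. This is only a sketch (the build_adapter helper, the 1x1 convolution, and the resize are assumptions, not part of the original models), mapping a (304, 304, 10) output to the (299, 299, 3) input that model2 expects:
import tensorflow as tf

def build_adapter(model1_output):
    # hypothetical adapter: a 1x1 convolution maps 10 channels down to 3,
    # then tf.image.resize rescales 304x304 to the 299x299 model2 expects
    x = tf.keras.layers.Conv2D(3, (1, 1), activation='relu')(model1_output)
    x = tf.keras.layers.Lambda(lambda t: tf.image.resize(t, (299, 299)))(x)
    return x

model1 = tf.keras.models.load_model('/path/to/model1.h5')
model2 = tf.keras.models.load_model('/path/to/model2.h5')
x = build_adapter(model1.output)
model = tf.keras.models.Model(inputs=model1.input, outputs=model2(x))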
Note: Anyone with a better solution can edit this answer.
Upvotes: 1