theo123490

Reputation: 58

Freezing Keras layers doesn't change summary trainable params

I have two models that have been compiled and trained in the script. Now I am trying to concatenate their second-to-last layers, freeze all of the existing layers, and add new trainable layers.

Here are the trained models:

morf_input = keras.layers.Input([np.shape(x)[1]])
morf_layer1 = keras.layers.Dense(800, activation="tanh")(morf_input)
morf_layer2 = keras.layers.Dense(800, activation="tanh")(morf_layer1)
morf_layer3 = keras.layers.Dense(600, activation="tanh")(morf_layer2)
morf_layer4 = keras.layers.Dense(300, activation="tanh")(morf_layer3)
morf_layer5 = keras.layers.Dense(50, activation="tanh")(morf_layer4)
morf_bneck6 = keras.layers.Dense(30, activation="tanh")(morf_layer5)
morf_output = keras.layers.Dense(2, activation="sigmoid")(morf_bneck6)

morf_model = keras.models.Model(inputs=morf_input, outputs=morf_output)

and

color_input = keras.layers.Input([np.shape(col_x)[1]])
color_layer1 = keras.layers.Dense(800, activation="tanh")(color_input)
color_layer2 = keras.layers.Dense(800, activation="tanh")(color_layer1)
color_layer3 = keras.layers.Dense(600, activation="tanh")(color_layer2)
color_layer4 = keras.layers.Dense(300, activation="tanh")(color_layer3)
color_layer5 = keras.layers.Dense(50, activation="tanh")(color_layer4)
color_bneck6 = keras.layers.Dense(10, activation="tanh")(color_layer5)
color_output = keras.layers.Dense(2, activation="sigmoid")(color_bneck6)

color_model = keras.models.Model(inputs=color_input, outputs=color_output)

Then I tried to freeze those layers with:

morf_layer1.trainable = False
morf_layer2.trainable = False
morf_layer3.trainable = False
morf_layer4.trainable = False
morf_layer5.trainable = False
morf_bneck6.trainable = False

color_layer1.trainable = False
color_layer2.trainable = False
color_layer3.trainable = False
color_layer4.trainable = False
color_layer5.trainable = False
color_bneck6.trainable = False

Then I created a new model with those layers:

concat_layer = keras.layers.Concatenate()([morf_bneck6, color_bneck6])
con_out_layer1 = keras.layers.Dense(500, activation="tanh")(concat_layer)
con_out_layer2 = keras.layers.Dense(400, activation="tanh")(con_out_layer1)
con_out_layer3 = keras.layers.Dense(300, activation="tanh")(con_out_layer2)
con_out_layer4 = keras.layers.Dense(30, activation="tanh")(con_out_layer3)
output = keras.layers.Dense(2, activation="sigmoid")(con_out_layer4)

model = keras.models.Model(inputs=[morf_input, color_input], outputs=output)

I compiled the model:

model.compile(optimizer=keras.optimizers.SGD(lr=0.008, decay=1e-6, momentum=0.9, nesterov=False),
              loss='binary_crossentropy',
              metrics=['accuracy'])

But model.summary() shows:

Total params: 3,035,432
Trainable params: 3,035,432
Non-trainable params: 0

Shouldn't the frozen layers increase the non-trainable params?

Upvotes: 1

Views: 667

Answers (2)

mujjiga

Reputation: 16896

Since you want to freeze all but the last 6 layers, use:

for layer in model.layers[:-6]:
    layer.trainable = False
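The reason your original attempt changes nothing is that names like morf_layer1 are bound to the layers' output tensors, not to the Layer objects themselves, so setting .trainable on them is silently ignored. A minimal sketch of the distinction (assuming keras; the dense_bneck name is illustrative):

# A Dense layer and its output tensor are different objects:
dense_bneck = keras.layers.Dense(30, activation="tanh")   # the Layer object
morf_bneck6 = dense_bneck(morf_layer5)                    # its output tensor

dense_bneck.trainable = False   # this actually freezes the weights

Iterating over model.layers, as in the loop above, reaches the Layer objects directly, which is why it works.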

Working Example

import numpy as np
import keras

# Model 1
inputs_1 = keras.layers.Input(shape=(10,))
l_1 = keras.layers.Dense(15,activation="tanh")(inputs_1)
outputs_1 = keras.layers.Dense(2,activation="sigmoid")(l_1)
model_1 = keras.models.Model(inputs_1, outputs_1)
model_1.compile(optimizer=keras.optimizers.SGD(lr=0.008),
              loss='binary_crossentropy',
              metrics=['accuracy'])
print ("Taining Model 1")
model_1.fit(np.random.randn(100,10), np.random.randn(100,2))

# Model 2
inputs_2 = keras.layers.Input(shape=(10,))
l_2 = keras.layers.Dense(15,activation="tanh")(inputs_2)
outputs_2 = keras.layers.Dense(2,activation="sigmoid")(l_2)
model_2 = keras.models.Model(inputs_2, outputs_2)
model_2.compile(optimizer=keras.optimizers.SGD(lr=0.008),
              loss='binary_crossentropy',
              metrics=['accuracy'])
print ("Taining Model 2")
model_2.fit(np.random.randn(100,10), np.random.randn(100,2))

# Combined Model
concat_layer= keras.layers.Concatenate()([outputs_1, outputs_2])
con_out_layer1 = keras.layers.Dense(5,activation="tanh")(concat_layer)
output = keras.layers.Dense(2,activation="sigmoid")(con_out_layer1)

model = keras.models.Model(inputs=[inputs_1, inputs_2], outputs=output)

model.summary()
# Freeze all but the last two layers (the Concatenate layer has no
# trainable weights anyway)
for layer in model.layers[:-2]:
    layer.trainable = False
model.summary()
model.compile(optimizer=keras.optimizers.SGD(lr=0.008),
              loss='binary_crossentropy',
              metrics=['accuracy'])

print ("Taining Combined Model")
model.fit([np.random.randn(100,10),np.random.randn(100,10)],np.random.randn(100,2))

Sample output

......
Total params: 431
Trainable params: 431
Non-trainable params: 0
......
......
Total params: 431
Trainable params: 37
Non-trainable params: 394
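
If you want to confirm which layers the loop actually froze, one quick check is to print each layer's trainable flag:

for layer in model.layers:
    print(layer.name, layer.trainable)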

Upvotes: 4

m33n

Reputation: 1751

Try the following order (sketched below):

  1. Layer definition
  2. Model instantiation
  3. Set trainable to False
  4. Compile the model
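
A minimal sketch of that order, assuming keras (the names inp, frozen_dense, and out are illustrative):

import keras

# 1. Layer definition: keep handles on the Layer objects themselves
inp = keras.layers.Input(shape=(10,))
frozen_dense = keras.layers.Dense(5, activation="tanh")
out = keras.layers.Dense(2, activation="sigmoid")(frozen_dense(inp))

# 2. Model instantiation
model = keras.models.Model(inputs=inp, outputs=out)

# 3. Set trainable to False (before compiling)
frozen_dense.trainable = False

# 4. Compile the model -- the trainable flags take effect at compile time
model.compile(optimizer="sgd", loss="binary_crossentropy")

model.summary()   # should now report non-trainable params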

Upvotes: 0
