Siddharth

Reputation: 98

RuntimeError: The layer has never been called and thus has no defined output shape

I am trying to add attention to a pretrained VGG16 network. When I try to get the output shape of the last layer, it throws an error. This is the code:

# imports assumed to be tf.keras (adjust if using standalone keras)
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import (Input, BatchNormalization, Conv2D, Dense,
                                     Dropout, GlobalAveragePooling2D, Lambda,
                                     multiply)
from tensorflow.keras.models import Model

img_shape = (224,224,3)
in_lay = Input(img_shape)
base_pretrained_model = VGG16(input_shape = img_shape, 
                              include_top = False, weights = 'imagenet')
base_pretrained_model.trainable = False
pt_depth = base_pretrained_model.get_output_shape_at(0)[-1]
pt_features = base_pretrained_model(in_lay)
bn_features = BatchNormalization()(pt_features)

# attention head: collapse the normalized VGG features into a single-channel mask
attn_layer = Conv2D(64, kernel_size = (1,1), padding = 'same', activation = 'relu')(bn_features)
attn_layer = Conv2D(16, kernel_size = (1,1), padding = 'same', activation = 'relu')(attn_layer)
attn_layer = Conv2D(1, 
                    kernel_size = (1,1), 
                    padding = 'valid', 
                    activation = 'sigmoid')(attn_layer)
# broadcast the 1-channel mask back to pt_depth channels with a frozen 1x1 conv of ones
up_c2_w = np.ones((1, 1, 1, pt_depth))
up_c2 = Conv2D(pt_depth, kernel_size = (1,1), padding = 'same', 
               activation = 'linear', use_bias = False, weights = [up_c2_w])
up_c2.trainable = False
attn_layer = up_c2(attn_layer)

# weighted global average pooling: mask the features, pool, then rescale by the pooled mask
mask_features = multiply([attn_layer, bn_features])
gap_features = GlobalAveragePooling2D()(mask_features)
gap_mask = GlobalAveragePooling2D()(attn_layer)

gap = Lambda(lambda x: x[0]/x[1], name = 'RescaleGAP')([gap_features, gap_mask])
gap_dr = Dropout(0.5)(gap)
dr_steps = Dropout(0.25)(Dense(128, activation = 'elu')(gap_dr))
out_layer = Dense(1, activation = 'sigmoid')(dr_steps)
tb_model = Model(inputs = [in_lay], outputs = [out_layer])

tb_model.compile(optimizer = 'adam', loss = 'binary_crossentropy',
                 metrics = ['binary_accuracy'])

tb_model.summary() 

I am getting an error on the line that calls get_output_shape_at, which says:

RuntimeError: The layer has never been called and thus has no defined output shape.

Upvotes: 0

Views: 612

Answers (1)

Ijaz Ahmad

Reputation: 41

Instead of

    pt_depth = base_pretrained_model.get_output_shape_at(0)[-1]

Try this one:

    pt_depth = base_pretrained_model.layers[-1].output_shape[-1]

Since include_top=False, the model ends in the "block5_pool" (MaxPooling2D) layer, whose output shape is (None, 7, 7, 512); indexing with [-1] gives the channel depth, 512, which is what pt_depth should be.
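
As a quick check, here is a minimal sketch (assuming tensorflow.keras; weights=None is used only to skip the ImageNet download and does not change the shapes):

    from tensorflow.keras.applications import VGG16

    # build the headless VGG16 as in the question, minus the pretrained weights
    base_pretrained_model = VGG16(input_shape=(224, 224, 3),
                                  include_top=False, weights=None)

    last = base_pretrained_model.layers[-1]
    print(last.name)              # block5_pool
    print(last.output_shape)      # (None, 7, 7, 512)
    print(last.output_shape[-1])  # 512 -> use this as pt_depth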

Upvotes: 1
