Eypros

Reputation: 5723

Keras model.summary function displays inconsistent output format

I am studying the ins and outs of Keras. As part of this, I was checking out the model.summary() function.

I was using a simple image classification example provided by Keras itself and loaded the various pretrained models it provides (Xception, VGG16, etc.).

I checked each model architecture using model.summary() as mentioned. Then I noticed that, for some reason, the Connected to column (the 4th column, that is) is not present in every model summary. For example, for MobileNetV2 I get (only the first few lines are shown):

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 224, 224, 3)  0                                            
__________________________________________________________________________________________________
Conv1_pad (ZeroPadding2D)       (None, 225, 225, 3)  0           input_1[0][0]                    
__________________________________________________________________________________________________
Conv1 (Conv2D)                  (None, 112, 112, 32) 864         Conv1_pad[0][0]      

but for MobileNet I get:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
conv1_pad (ZeroPadding2D)    (None, 225, 225, 3)       0         
_________________________________________________________________
conv1 (Conv2D)               (None, 112, 112, 32)      864       

This output is produced without taking any extra action after loading the model (no training, no inference, etc.).

This seems odd and I am not sure what's going on here. For example, creating the simple model from this question here (up to the model0.fit(...) part) and running model0.summary() also gives me a summary without the Connected to column, contrary to the summary posted in that question.

So, why does the output change? What's the deal with model.summary()? Do we have some control over the output (although the examples above do not suggest so)? Or does the output depend on the way a model was structured?

Edit:

I added the (trivial) code used to reproduce the summary of both models as requested in a comment.

from keras.applications.mobilenet_v2 import MobileNetV2
from keras.applications.mobilenet import MobileNet

model1 = MobileNetV2(weights='imagenet')
print(model1.summary())
model2 = MobileNet(weights='imagenet')
print(model2.summary())

Also, my system uses Keras 2.2.4, TensorFlow 1.12.0 and Ubuntu 16.04, in case this information is useful somehow.

Upvotes: 1

Views: 1418

Answers (1)

Mikhail Stepanov

Reputation: 3790

I suppose the reason is this: MobileNetV2 is implemented as a keras.Model, but MobileNet as a keras.Sequential.
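For instance, one quick way to check which class each model object belongs to (a small sketch of mine, not from the original post; model1 and model2 are assumed to be the MobileNetV2 and MobileNet instances created in the question's reproduction code):

# Sketch: inspect the class name of each loaded model object.
# model1 / model2 come from the question's reproduction code.
print(model1.__class__.__name__)
print(model2.__class__.__name__)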

Both Model and Sequential have a summary method. When called, it invokes the print_summary function, which acts differently for sequential-like and non-sequential models:

if sequential_like:
    line_length = line_length or 65
    positions = positions or [.45, .85, 1.]
    if positions[-1] <= 1:
        positions = [int(line_length * p) for p in positions]
    # header names for the different log elements
    to_display = ['Layer (type)', 'Output Shape', 'Param #']
else:
    line_length = line_length or 98
    positions = positions or [.33, .55, .67, 1.]
    if positions[-1] <= 1:
        positions = [int(line_length * p) for p in positions]
    # header names for the different log elements
    to_display = ['Layer (type)',
                  'Output Shape',
                  'Param #',
                  'Connected to']
    relevant_nodes = []
    for v in model._nodes_by_depth.values():
        relevant_nodes += v 

(link). As you can see, it simply doesn't print 'Connected to' for a sequential-like model.
I guess the reason is that a sequential model doesn't allow connecting layers in a non-sequential order, so they are just connected one after another.
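To make the difference concrete, here is a minimal sketch (mine, not from the Keras sources; it uses standalone Keras, as in the question) contrasting a plain Sequential model with a functional model that has a branch. Only the branched model fails the sequential-like check, so only its summary shows the Connected to column:

from keras.models import Sequential, Model
from keras.layers import Dense, Input, Add

# Plain Sequential model -> 3-column summary, no 'Connected to'
seq = Sequential([
    Dense(16, activation='relu', input_shape=(8,)),
    Dense(1),
])
seq.summary()

# Functional model with a skip connection -> the Add layer has two
# inbound layers, so the model is not sequential-like and its summary
# gains the 'Connected to' column
inp = Input(shape=(8,))
x = Dense(8, activation='relu')(inp)
out = Add()([inp, x])
func = Model(inputs=inp, outputs=out)
func.summary()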

Also, it checks the model type via model.__class__.__name__ == 'Sequential' (link). I doubt it's a good idea to try to change this "on the fly" to obtain a different output.
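As for the "do we have some control over the output" part of the question: the line_length and positions defaults visible in the snippet above are parameters of summary() itself, so the layout can be adjusted; which columns are shown, however, is still decided by the sequential-like check. A short sketch, reusing MobileNet from the question:

from keras.applications.mobilenet import MobileNet

model = MobileNet(weights='imagenet')
# Override the default width (65) and the relative column break points;
# 'Connected to' still won't appear because the model is sequential-like.
model.summary(line_length=100, positions=[.5, .8, 1.])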

Upvotes: 3
