Bennett.Yang

Reputation: 95

Is there any way to access layers in tensorflow_hub.KerasLayer object?

I am trying to use a pre-trained model from TensorFlow Hub in my object detection model. Following the official instructions, I wrapped a model from the hub as a KerasLayer object. Then I realized that I cannot access the layers inside this pre-trained model, but I need the outputs of some specific layers to build my model. Is there any way to access the layers in a tensorflow_hub.KerasLayer object?

Upvotes: 9

Views: 3192

Answers (4)

cyber-monk

Reputation: 5570

This doesn't give you programmatic access to the layers, but it does allow you to inspect them.

import os

import tensorflow as tf
import tensorflow_hub as hub

# tfhub_dir is assumed to point at a local copy of the SavedModel
resnet_v2 = tf.keras.Sequential([
    hub.KerasLayer(os.path.join(tfhub_dir, 'imagenet_resnet_v2_50_classification_5'))
])
resnet_v2.build([None, 224, 224, 3])

print(tf.__version__)
resnet_v2.summary()
single_keras_layer = resnet_v2.layers[0]
variables = single_keras_layer.variables

for i, v in enumerate(variables):
    print('[{:03d}] {} [{}]'.format(i, v.name, v.shape))

Output

2.13.0
Model: "sequential_6"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 keras_layer_6 (KerasLayer)  (None, 1001)              25615849  
                                                                 
=================================================================
Total params: 25615849 (97.72 MB)
Trainable params: 0 (0.00 Byte)
Non-trainable params: 25615849 (97.72 MB)
_________________________________________________________________
[000] resnet_v2_50/block2/unit_1/bottleneck_v2/shortcut/biases:0 [(512,)]
[001] resnet_v2_50/block2/unit_4/bottleneck_v2/conv1/BatchNorm/gamma:0 [(128,)]
[002] resnet_v2_50/block3/unit_1/bottleneck_v2/conv2/weights:0 [(3, 3, 256, 256)]
[003] resnet_v2_50/block4/unit_1/bottleneck_v2/conv3/biases:0 [(2048,)]
[004] resnet_v2_50/block1/unit_1/bottleneck_v2/shortcut/biases:0 [(256,)]
[005] resnet_v2_50/block3/unit_2/bottleneck_v2/preact/gamma:0 [(1024,)]
[006] resnet_v2_50/block3/unit_3/bottleneck_v2/conv1/BatchNorm/gamma:0 [(256,)]
[007] resnet_v2_50/block4/unit_3/bottleneck_v2/conv1/BatchNorm/gamma:0 [(512,)]
[008] resnet_v2_50/block1/unit_1/bottleneck_v2/preact/gamma:0 [(64,)]
[009] resnet_v2_50/block1/unit_2/bottleneck_v2/conv3/weights:0 [(1, 1, 64, 256)]
[010] resnet_v2_50/block2/unit_1/bottleneck_v2/preact/gamma:0 [(256,)]
[011] resnet_v2_50/block2/unit_1/bottleneck_v2/conv2/BatchNorm/gamma:0 [(128,)]
[012] resnet_v2_50/block2/unit_3/bottleneck_v2/conv3/biases:0 [(512,)]
...
[268] resnet_v2_50/block4/unit_1/bottleneck_v2/preact/moving_variance:0 [(1024,)]
[269] resnet_v2_50/block4/unit_1/bottleneck_v2/conv2/BatchNorm/moving_variance:0 [(512,)]
[270] resnet_v2_50/block2/unit_2/bottleneck_v2/conv1/BatchNorm/moving_variance:0 [(128,)]
[271] resnet_v2_50/block1/unit_3/bottleneck_v2/preact/moving_mean:0 [(256,)]
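The variables list above is unordered, but once you know the names you can filter it yourself. A minimal sketch of that lookup, using plain (name, shape) pairs copied from the printout in place of real variables (so it runs without TensorFlow); with the actual model you would iterate over `single_keras_layer.variables` instead:

```python
# (name, shape) pairs as printed by the loop above; stand-ins for
# the tf.Variable objects in single_keras_layer.variables.
variables = [
    ('resnet_v2_50/block2/unit_1/bottleneck_v2/shortcut/biases:0', (512,)),
    ('resnet_v2_50/block3/unit_1/bottleneck_v2/conv2/weights:0', (3, 3, 256, 256)),
    ('resnet_v2_50/block3/unit_2/bottleneck_v2/preact/gamma:0', (1024,)),
]

# Index by name, then pull out everything belonging to block3
by_name = {name: shape for name, shape in variables}
block3 = {n: s for n, s in by_name.items() if '/block3/' in n}
print(block3)
```

This gives you the weights of a specific block, but not its activations; for intermediate outputs see the `return_endpoints` approach in the other answers.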

Upvotes: 1

ma710u5

Reputation: 21

Since return_endpoints=True doesn't seem to work anymore, you can do this:

import tensorflow_hub as hub

efficientnet_lite0_base_layer = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/efficientnet/lite0/feature-vector/2",
    output_shape=[1280],
    trainable=False
)

print("Number of weights in the model:", len(efficientnet_lite0_base_layer.weights))
print("{:<80} {:<20} {:<10}".format('Layer', 'Shape', 'Type'))

for w in efficientnet_lite0_base_layer.weights:
    # Read shape and dtype directly from the variable instead of
    # parsing its string representation
    print("{:<80} {:<20} {:<10}".format(w.name, str(w.shape), w.dtype.name))

Upvotes: 1

egormkn

Reputation: 208

There is an undocumented way to get intermediate layers out of some TF2 SavedModels exported from TF-Slim, such as https://tfhub.dev/google/imagenet/inception_v1/feature_vector/4: passing return_endpoints=True to the SavedModel's __call__ function changes the output to a dict.

NOTE: This interface is subject to change or removal, and has known issues.

import tensorflow as tf
import tensorflow_hub as hub

model = hub.KerasLayer(
    'https://tfhub.dev/google/imagenet/inception_v1/feature_vector/4',
    trainable=False,
    arguments=dict(return_endpoints=True)
)
inputs = tf.keras.layers.Input((224, 224, 3))
outputs = model(inputs)
for k, v in sorted(outputs.items()):
    print(k, v.shape)

Output for this example:

InceptionV1/Conv2d_1a_7x7 (None, 112, 112, 64)
InceptionV1/Conv2d_2b_1x1 (None, 56, 56, 64)
InceptionV1/Conv2d_2c_3x3 (None, 56, 56, 192)
InceptionV1/MaxPool_2a_3x3 (None, 56, 56, 64)
InceptionV1/MaxPool_3a_3x3 (None, 28, 28, 192)
InceptionV1/MaxPool_4a_3x3 (None, 14, 14, 480)
InceptionV1/MaxPool_5a_2x2 (None, 7, 7, 832)
InceptionV1/Mixed_3b (None, 28, 28, 256)
InceptionV1/Mixed_3c (None, 28, 28, 480)
InceptionV1/Mixed_4b (None, 14, 14, 512)
InceptionV1/Mixed_4c (None, 14, 14, 512)
InceptionV1/Mixed_4d (None, 14, 14, 512)
InceptionV1/Mixed_4e (None, 14, 14, 528)
InceptionV1/Mixed_4f (None, 14, 14, 832)
InceptionV1/Mixed_5b (None, 7, 7, 832)
InceptionV1/Mixed_5c (None, 7, 7, 1024)
InceptionV1/global_pool (None, 1, 1, 1024)
default (None, 1024)
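With the outputs in a dict, wiring specific activations into a detection model is just a matter of picking keys. A minimal sketch of that selection, using shape tuples copied from the printout above as stand-ins for real Keras tensors (the choice of endpoints is hypothetical, and the snippet runs without TensorFlow):

```python
# Stand-in for the dict returned by the hub layer; in the real model
# the values would be Keras tensors, shown here as shape tuples.
outputs = {
    'InceptionV1/Mixed_4f': (None, 14, 14, 832),
    'InceptionV1/Mixed_5c': (None, 7, 7, 1024),
    'default': (None, 1024),
}

# Pick the intermediate feature maps a detection head might consume
wanted = ['InceptionV1/Mixed_4f', 'InceptionV1/Mixed_5c']
taps = [outputs[name] for name in wanted]
print(taps)
```

In the real model you would pass the selected tensors to `tf.keras.Model(inputs, taps)` to get a feature extractor with multiple outputs.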

Issues to be aware of:

  • Undocumented, subject to change or removal, not available consistently.
  • __call__ computes all outputs (and applies all update ops during training) irrespective of the ones being used later on.

Source: https://github.com/tensorflow/hub/issues/453

Upvotes: 4

For one to be able to do that easily, the creator of the pre-trained model would have needed to make that output accessible, e.g. by exporting an extra function or an extra signature that outputs the activation you want to use.

Upvotes: 0
