D.Laupheimer

Reputation: 1074

How to visualize intermediate feature layers in keras?

I've read the paper Visualizing and Understanding Convolutional Networks by Zeiler and Fergus and would like to use their visualization technique. The paper sounds promising - but unfortunately, I have no idea how to implement it in Keras (version 1.2.2).

Two questions:

  1. Keras only provides a Deconvolution2D layer, but no Unpooling and no "reverse ReLU" layer. How can I make use of the switch variables mentioned in the paper in order to implement the unpooling? How do I have to use the reverse ReLU (or is it just the "normal" ReLU)?

  2. The Keras Deconvolution2D layer has the attributes activation and subsample. Maybe those are the key to solving my problem?! If so, I would have to replace each of my Convolution2D + Activation + Pooling layer combinations with a single Deconvolution2D layer, right?

I appreciate your help!

Upvotes: 4

Views: 848

Answers (1)

Lukasz Tracewski

Reputation: 11377

The authors of the paper you cite (as far as I remember) briefly discuss how to handle this, specifically:

  1. ReLU. The inverse of ReLU is... ReLU. Since the forward pass applies ReLU to the feature maps, the backward pass likewise passes the reconstructions through a ReLU, so that the deconvolution operates on rectified reconstructions.
  2. Pooling. There is no way to invert pooling strictly speaking. To cite the paper, "we can obtain an approximate inverse by recording the locations of the maxima within each pooling region in a set of switch variables. In the deconvnet, the unpooling operation uses these switches to place the reconstructions from the layer above into appropriate locations, preserving the structure of the stimulus."
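To make the switch mechanism concrete, here is a minimal NumPy sketch of max pooling that records switches and the corresponding approximate unpooling. The function names (`max_pool_with_switches`, `unpool_with_switches`) are my own for illustration, not Keras API; in a real model you would implement this as a custom layer.

```python
import numpy as np

def relu(x):
    # In the deconvnet, the "inverse" of ReLU is simply ReLU again.
    return np.maximum(x, 0)

def max_pool_with_switches(x, size=2):
    """Forward max pooling over non-overlapping size x size regions,
    recording the flat index of each maximum (the 'switches')."""
    h, w = x.shape
    ph, pw = h // size, w // size
    pooled = np.zeros((ph, pw), dtype=x.dtype)
    switches = np.zeros((ph, pw), dtype=np.int64)
    for i in range(ph):
        for j in range(pw):
            patch = x[i * size:(i + 1) * size, j * size:(j + 1) * size]
            flat = int(np.argmax(patch))
            switches[i, j] = flat
            pooled[i, j] = patch.flat[flat]
    return pooled, switches

def unpool_with_switches(pooled, switches, size=2):
    """Approximate inverse of max pooling: place each pooled value back
    at its recorded max location; all other positions stay zero."""
    ph, pw = pooled.shape
    out = np.zeros((ph * size, pw * size), dtype=pooled.dtype)
    for i in range(ph):
        for j in range(pw):
            di, dj = divmod(int(switches[i, j]), size)
            out[i * size + di, j * size + dj] = pooled[i, j]
    return out
```

Usage: pool a 2x2 block `[[1, 2], [3, 4]]` and the unpooling places the maximum 4 back at its original bottom-right position, with zeros elsewhere - exactly the "preserving the structure of the stimulus" behaviour the paper describes.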

Now, closer to an actual implementation in Keras, have a look at this thread, where you will find some examples you can use immediately.

Upvotes: 1

Related Questions