Reputation: 1074
I've read the paper Visualizing and Understanding Convolutional Networks by Zeiler and Fergus and would like to make use of their visualization technique. The approach sounds promising, but unfortunately I have no idea how to implement it in Keras (version 1.2.2).
Two questions:
1. Keras only provides a Deconvolution2D layer, but no Unpooling layer and no "reverse ReLU" layer. How can I make use of the switch variables mentioned in the paper to implement the unpooling? And how do I have to use the reverse ReLU (or is it just the "normal" ReLU)?
2. The Keras Deconvolution2D layer has the attributes activation and subsample. Maybe those are the key to solving my problem? If so, I would have to replace each of my Convolution2D + Activation + Pooling combinations (sketched below) with a single Deconvolution2D layer, right?
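For context, here is a minimal sketch of the kind of stack I mean (layer sizes and the channel-first input shape are purely illustrative):

```python
from keras.models import Sequential
from keras.layers import Convolution2D, Activation, MaxPooling2D

model = Sequential()
# conv -> relu -> max pool blocks; the pooling switches would have to be
# recorded at each MaxPooling2D layer to reconstruct activations later
model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(3, 224, 224)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
```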
I appreciate your help!
Upvotes: 4
Views: 848
Reputation: 11377
The authors of the paper you cite briefly discuss (as far as I remember) how both operations are handled. Paraphrasing their description of the deconvnet: max pooling is non-invertible, but an approximate inverse is obtained by recording the locations of the maxima within each pooling region in a set of switch variables; unpooling then places the reconstructions from the layer above back into those recorded locations. For rectification there is no special "reverse ReLU": the reconstructed signal is simply passed through an ordinary ReLU again. The filtering step uses transposed (flipped) versions of the learned filters.
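To make the switch mechanism concrete, here is a minimal NumPy sketch of max pooling with recorded switches and the corresponding unpooling (hypothetical helper names; a single 2D feature map with non-overlapping windows and dimensions divisible by the pool size assumed):

```python
import numpy as np

def max_pool_with_switches(x, size=2):
    """Max-pool a 2D feature map and record the position of each maximum (the switch)."""
    h, w = x.shape
    pooled = np.zeros((h // size, w // size))
    switches = np.zeros_like(pooled, dtype=int)  # flat index of the max inside each window
    for i in range(0, h, size):
        for j in range(0, w, size):
            window = x[i:i + size, j:j + size]
            idx = np.argmax(window)
            pooled[i // size, j // size] = window.flat[idx]
            switches[i // size, j // size] = idx
    return pooled, switches

def unpool_with_switches(pooled, switches, size=2):
    """Place each pooled value back at its recorded max location, zeros elsewhere."""
    h, w = pooled.shape
    out = np.zeros((h * size, w * size))
    for i in range(h):
        for j in range(w):
            di, dj = divmod(int(switches[i, j]), size)
            out[i * size + di, j * size + dj] = pooled[i, j]
    return out

def rectify(x):
    """The 'reverse ReLU' in the deconvnet is just an ordinary ReLU."""
    return np.maximum(x, 0.0)
```

The Keras MaxPooling2D layer does not expose these max locations, which is why the switches have to be recorded separately (for example with backend argmax operations or a custom layer).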
Now, closer to an actual Keras implementation, have a look at this thread; there you will find some examples that you can use immediately.
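For completeness, this is roughly how a Deconvolution2D layer is set up in Keras 1.2.2 (the shapes below are illustrative, channel-first ordering assumed). Note that activation and subsample only set the layer's own non-linearity and stride: the layer is a transposed convolution and does not perform unpooling by itself, so it cannot on its own replace a Convolution2D + Activation + Pooling block.

```python
from keras.models import Sequential
from keras.layers import Deconvolution2D

model = Sequential()
# 3 output channels, 3x3 kernel; output_shape must include the batch dimension.
model.add(Deconvolution2D(3, 3, 3,
                          output_shape=(None, 3, 25, 25),
                          subsample=(2, 2),      # stride of the transposed convolution
                          activation='relu',     # the layer's own non-linearity
                          border_mode='valid',
                          input_shape=(3, 12, 12)))
```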
Upvotes: 1