Reputation: 1014
If I want to train an autoencoder with tied weights (the encoder and decoder share the same weight parameters), how do I use tf.layers.conv2d to do that correctly?
I cannot simply share variables between the corresponding conv2d layers of the encoder and decoder, because the decoder's weights are the transpose of the encoder's.
Maybe tied weights are rarely used nowadays, but I am just curious.
Upvotes: 0
Views: 762
Reputation: 53766
Use tf.nn.conv2d (and tf.nn.conv2d_transpose correspondingly). These are low-level functions that accept the kernel variable directly as an argument, so the same kernel can be passed to both the encoder and the decoder.
kernel = tf.get_variable('kernel', [5, 5, 1, 32])
...
# Encoder: regular convolution with the shared kernel
encoder_conv = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding='SAME')
...
# Decoder: transposed convolution reusing the same kernel variable.
# Its input must have 32 channels (e.g. the encoder output), and output_shape is required.
decoder_conv = tf.nn.conv2d_transpose(encoder_conv, kernel, output_shape=tf.shape(images),
                                      strides=[1, 1, 1, 1], padding='SAME')
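For context, here is a minimal sketch of how the tied kernel could be trained end to end. The 28x28 grayscale input shape, the bias variables, the MSE reconstruction loss, and the Adam optimizer are illustrative assumptions, not part of the original answer:

import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 28, 28, 1])

# One kernel variable shared by encoder and decoder (tied weights).
kernel = tf.get_variable('kernel', [5, 5, 1, 32])
bias_enc = tf.get_variable('bias_enc', [32], initializer=tf.zeros_initializer())
bias_dec = tf.get_variable('bias_dec', [1], initializer=tf.zeros_initializer())

# Encoder: plain convolution with the shared kernel.
code = tf.nn.relu(
    tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding='SAME') + bias_enc)

# Decoder: transposed convolution with the *same* kernel variable,
# mapping the 32-channel code back to a 1-channel reconstruction.
reconstruction = tf.nn.conv2d_transpose(
    code, kernel, output_shape=tf.shape(images),
    strides=[1, 1, 1, 1], padding='SAME') + bias_dec

# Reconstruction loss; gradients from both the forward and transposed
# convolutions flow into the single shared kernel.
loss = tf.reduce_mean(tf.square(reconstruction - images))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

Because both ops reference the same variable, a single optimizer step updates the kernel with gradients from the encoder and the decoder, which is exactly what weight tying requires.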
Upvotes: 1