pedro_galher

Reputation: 384

Use ZeroPadding+Convolution instead of Convolution with padding='same'

I've been working with TensorFlow 2.0 for almost a year now, and there is a pattern I keep running into in other people's code whenever 'same' padding is needed for a convolution.

Some code I have seen implements the following:

x = ZeroPadding2D(padding=(pad, pad))(x)
x = Conv2D(64, (3, 3), strides=(1, 1), dilation_rate=pad,
           use_bias=False)(x)

Instead of directly using:

x = Conv2D(64, (3, 3), strides=(1, 1), dilation_rate=pad,
           use_bias=False, padding='same')(x)

Is there any difference between padding before the convolution (which then uses the default padding='valid') and using padding='same' directly inside the convolution?

I guess there is no difference between the two methods, so why do people use the first one?

Upvotes: 0

Views: 423

Answers (1)

nessuno

Reputation: 27070

There's absolutely zero difference.

Here's the proof:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, ZeroPadding2D

# Constant input and constant kernel so both paths are deterministic
inputs = tf.ones((1, 5, 5, 1))
kernel_initializer = tf.keras.initializers.Constant(2)

# Path 1: explicit zero padding followed by a "valid" (default) convolution
pad = 1
x = ZeroPadding2D(padding=(pad, pad))(inputs)
x = Conv2D(
    64,
    (3, 3),
    strides=(1, 1),
    dilation_rate=pad,
    use_bias=False,
    kernel_initializer=kernel_initializer,
)(x)

# Path 2: let the convolution pad for us with padding="same"
y = Conv2D(
    64,
    (3, 3),
    strides=(1, 1),
    dilation_rate=pad,
    use_bias=False,
    padding="same",
    kernel_initializer=kernel_initializer,
)(inputs)

# Raises if any element of the two outputs differs
tf.assert_equal(x, y)

People do the first perhaps because they want to remember the padding formula (?) or because it is their stylistic choice - but it makes zero difference in practice.
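If you do want to compute the explicit padding yourself (the "formula" mentioned above), here is a minimal sketch of the usual rule for stride 1; the helper same_padding_1d is just an illustrative name, not anything from Keras:

def same_padding_1d(kernel_size: int, dilation: int = 1) -> int:
    # With stride 1, "same" padding adds (effective kernel size - 1) zeros
    # in total, split evenly on both sides for odd kernels.
    effective_kernel = dilation * (kernel_size - 1) + 1
    return (effective_kernel - 1) // 2

# A 3x3 kernel with dilation 1 gives 1, matching ZeroPadding2D(padding=(1, 1)) above.
print(same_padding_1d(3, 1))  # 1
print(same_padding_1d(3, 2))  # 2 (the dilated 3x3 kernel spans 5 pixels)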

Upvotes: 1
