Mark

Reputation: 105

What are the advantages of conv2d_same over conv2d(..., padding='SAME') in Faster R-CNN?

Why use conv2d_same instead of a normal conv2d(..., padding='SAME') in Faster R-CNN?

The conv2d_same code is on TensorFlow's GitHub.

Upvotes: 2

Views: 389

Answers (1)

Maxim

Reputation: 53768

conv2d_same is exactly the same as conv2d when stride == 1.

When stride > 1, i.e. when the intention is to downsample the tensor, the padding is applied a bit differently. From the documentation:

When stride > 1, then we do explicit zero-padding, followed by conv2d with 'VALID' padding. Note that

net = conv2d_same(inputs, num_outputs, 3, stride=stride)

is equivalent to

net = slim.conv2d(inputs, num_outputs, 3, stride=1, padding='SAME')
net = subsample(net, factor=stride)

whereas

net = slim.conv2d(inputs, num_outputs, 3, stride=stride, padding='SAME')

is different when the input's height or width is even, which is why we add the current function.
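In code, the behaviour described in that docstring boils down to padding the input explicitly and symmetrically by kernel_size - 1 pixels in total and then running a 'VALID' convolution. A minimal sketch of such a helper, reconstructed from that description rather than copied from the library source (TF 1.x / slim style):

import tensorflow as tf
slim = tf.contrib.slim

def conv2d_same(inputs, num_outputs, kernel_size, stride, scope=None):
    # With stride 1 there is nothing special to do: plain 'SAME' convolution.
    if stride == 1:
        return slim.conv2d(inputs, num_outputs, kernel_size, stride=1,
                           padding='SAME', scope=scope)
    # With stride > 1: explicit, symmetric zero-padding of kernel_size - 1
    # pixels in total, followed by a 'VALID' convolution.
    pad_total = kernel_size - 1
    pad_beg = pad_total // 2
    pad_end = pad_total - pad_beg
    inputs = tf.pad(inputs, [[0, 0], [pad_beg, pad_end],
                             [pad_beg, pad_end], [0, 0]])
    return slim.conv2d(inputs, num_outputs, kernel_size, stride=stride,
                       padding='VALID', scope=scope)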

The difference shows up when the input's height or width is even: both variants produce the same output size, but 'SAME' padding with stride > 1 pads asymmetrically (only on the bottom/right), while conv2d_same pads symmetrically, so the convolution windows land on different input positions and the downsampled values differ.
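As a concrete illustration of the even-size case (my own example, not from the original answer): take a 4x4 input, a 3x3 kernel and stride 2. Both paths give a 2x2 output, but 'SAME' pads only on the bottom/right, whereas explicit symmetric padding shifts where the kernel windows land, so the values differ (TF 2.x eager code for brevity):

import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])  # 4x4 input
k = tf.ones([3, 3, 1, 1])                                      # 3x3 box filter

# Built-in 'SAME' padding with stride 2: pads 0 on top/left, 1 on bottom/right.
same = tf.nn.conv2d(x, k, strides=2, padding='SAME')

# conv2d_same-style: symmetric explicit padding, then 'VALID' convolution.
padded = tf.pad(x, [[0, 0], [1, 1], [1, 1], [0, 0]])
explicit = tf.nn.conv2d(padded, k, strides=2, padding='VALID')

print(tf.squeeze(same).numpy())      # 2x2, windows centred on rows/cols 1 and 3
print(tf.squeeze(explicit).numpy())  # 2x2, windows centred on rows/cols 0 and 2

The second variant samples the same positions (rows/cols 0 and 2) you would get from a stride-1 'SAME' convolution followed by subsampling with factor 2, which is exactly the equivalence the docstring states.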

Upvotes: 1
