machinery

Reputation: 6290

Padding in Conv2D gives wrong result?

I'm using the Conv2D method of Keras. In the documentation it is written that

padding: one of "valid" or "same" (case-insensitive). Note that "same" is slightly inconsistent across backends with strides != 1, as described here

As input I have images of size (64,80,1) and I'm using kernel of size 3x3. Does that mean that the padding is wrong when using Conv2D(32, 3, strides=2, padding='same')(input)?

How can I fix it using ZeroPadding2D?

Upvotes: 0

Views: 935

Answers (1)

sebrockm

Reputation: 6002

Based on your comment, and seeing that you defined a stride of 2, I believe what you want to achieve is an output size that's exactly half of the input size, i.e. output_shape == (32, 40, 32) (the trailing 32 is the number of filters).

In that case, just call model.summary() on the final model and you will see if that is the case or not.

If it is, there's nothing else to do. If it's bigger than you want, you can add a Cropping2D layer to cut off pixels from the borders of the image. If it's smaller than you want, you can add a ZeroPadding2D layer to add zero-pixels to the borders of the image.
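You can also predict what model.summary() will report without building the model, by reproducing the output-shape arithmetic Keras applies for 'same' and 'valid' padding. This is a minimal sketch; conv_output_length is a hypothetical helper mirroring those formulas, not a Keras API:

```python
from math import ceil

def conv_output_length(input_length, kernel_size, stride, padding):
    """Spatial output length of a conv layer, mirroring Keras's rules:
    'same'  -> ceil(input / stride)
    'valid' -> ceil((input - kernel + 1) / stride)
    """
    if padding == "same":
        return ceil(input_length / stride)
    if padding == "valid":
        return ceil((input_length - kernel_size + 1) / stride)
    raise ValueError(f"unknown padding: {padding}")

# Input images are (64, 80, 1); kernel 3x3, stride 2, padding='same'
h = conv_output_length(64, kernel_size=3, stride=2, padding="same")
w = conv_output_length(80, kernel_size=3, stride=2, padding="same")
print((h, w))  # (32, 40) -- exactly half the input, as desired
```

So with padding='same' and stride 2, a (64, 80) input already comes out as (32, 40), whereas 'valid' would give (31, 39).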

The syntax to create these layers is

Cropping2D(cropping=((a, b), (c, d)))
ZeroPadding2D(padding=((a, b), (c, d)))
  • a: number of rows you want to add/cut off to/from the top
  • b: number of rows you want to add/cut off to/from the bottom
  • c: number of columns you want to add/cut off to/from the left
  • d: number of columns you want to add/cut off to/from the right
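As a sanity check on that argument order, here is a small sketch of how an asymmetric ((a, b), (c, d)) tuple changes a (height, width) shape. The helpers padded_shape and cropped_shape are hypothetical, for illustration only, not Keras APIs:

```python
def padded_shape(height, width, padding):
    """Shape after ZeroPadding2D(padding=((a, b), (c, d))):
    a/b rows added to top/bottom, c/d columns to left/right."""
    (a, b), (c, d) = padding
    return (height + a + b, width + c + d)

def cropped_shape(height, width, cropping):
    """Shape after Cropping2D(cropping=((a, b), (c, d))):
    a/b rows cut from top/bottom, c/d columns from left/right."""
    (a, b), (c, d) = cropping
    return (height - a - b, width - c - d)

# e.g. trim a 33x41 feature map down to the desired 32x40
print(cropped_shape(33, 41, ((1, 0), (1, 0))))  # (32, 40)
```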

Note, however, that there is no strict technical need to perfectly halve the size with each convolution layer. Your model might work well without any padding or cropping. You will have to experiment with it in order to find out.

Upvotes: 1

Related Questions