Reputation: 6290
I'm using Keras' Conv2D layer. The documentation says:
padding: one of "valid" or "same" (case-insensitive). Note that "same" is slightly inconsistent across backends with strides != 1, as described here
As input I have images of size (64, 80, 1), and I'm using a 3x3 kernel. Does that mean the padding is wrong when I use Conv2D(32, 3, strides=2, padding='same')(input)?
How can I fix it using ZeroPadding2D?
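For reference, here is a minimal sketch of what I'm doing (assuming the standalone Keras import path; the variable names are mine):

from keras.layers import Input, Conv2D

input = Input(shape=(64, 80, 1))  # grayscale images of size 64x80
conv = Conv2D(32, 3, strides=2, padding='same')(input)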
Upvotes: 0
Views: 935
Reputation: 6002
Based on your comment, and seeing that you defined a stride of 2, I believe what you want to achieve is an output size that's exactly half of the input size, i.e. output_shape == (32, 40, 32) (the last 32 is the number of filters). With the TensorFlow backend, 'same' padding gives ceil(input_size / stride) per spatial dimension, so ceil(64/2) = 32 and ceil(80/2) = 40; but given the backend inconsistency the documentation warns about, it's worth verifying.
In that case, just call model.summary()
on the final model and you will see if that is the case or not.
If it is, there's nothing else to do.
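As a hedged sketch of that check (again assuming the standalone Keras API; the model and variable names are mine):

from keras.layers import Input, Conv2D
from keras.models import Model

inp = Input(shape=(64, 80, 1))
out = Conv2D(32, 3, strides=2, padding='same')(inp)
model = Model(inp, out)
model.summary()  # with the TensorFlow backend you should see an output shape of (None, 32, 40, 32)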
If it's bigger than you want, you can add a Cropping2D
layer to cut off pixels from the borders of the image.
If it's smaller than you want, you can add a ZeroPadding2D
layer to add zero-pixels to the borders of the image.
The syntax to create these layers is
Cropping2D(cropping=((a, b), (c, d)))
ZeroPadding2D(padding=((a, b), (c, d)))
where:

a: number of rows to add/cut off at the top
b: number of rows to add/cut off at the bottom
c: number of columns to add/cut off at the left
d: number of columns to add/cut off at the right

Note, however, that there is no strict technical need to always perfectly halve the size with each convolution layer. Your model might work well without any padding or cropping. You will have to experiment with it in order to find out.
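For concreteness, here's a hedged usage sketch of both layers (standalone Keras imports assumed; x stands for a hypothetical feature-map tensor from earlier in your model):

from keras.layers import Cropping2D, ZeroPadding2D

# cut one row off the bottom and one column off the right
x = Cropping2D(cropping=((0, 1), (0, 1)))(x)

# or: add one zero-row at the top and one zero-column at the left
x = ZeroPadding2D(padding=((1, 0), (1, 0)))(x)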
Upvotes: 1