I may be missing something simple here, but when I change the strides in my convolution layers, I'm not getting a corresponding change in the number of parameters to be fit. Consider these:
from keras import layers as L

# input is the squeezed output of an earlier Conv3D layer, shape (None, 63, 143, 32)
x = L.Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), padding='valid')(input)
y = L.Conv2D(filters=32, kernel_size=(3, 3), strides=(2, 2), padding='valid')(input)
z = L.Conv2D(filters=32, kernel_size=(3, 3), strides=(3, 3), padding='valid')(input)
I thought that a stride of (3, 3) would mean the filters are laid down 3 times less often in each dimension and therefore a correspondingly smaller number of parameters to fit. And yet, this isn't the case.
If my input layer has shape (None, 63, 143, 32) -- I'm feeding it a squeezed-down output of a Conv3D -- then the number of parameters of each convolution is always 9248, regardless of stride. So... what am I missing?
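For reference, here is a minimal self-contained version of the check, with a standalone Input of that shape standing in for my actual squeezed Conv3D output; every stride reports the same parameter count:

from keras import layers as L

inp = L.Input(shape=(63, 143, 32))  # stand-in for the squeezed Conv3D output

for s in [(1, 1), (2, 2), (3, 3)]:
    conv = L.Conv2D(filters=32, kernel_size=(3, 3), strides=s, padding='valid')
    conv(inp)                        # build the layer on the input
    print(s, conv.count_params())    # prints 9248 for every stride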
Convolution filter weights never depend on the size of the image, the padding, the strides, etc.
They depend only on kernel_size and filters (plus the number of input channels).
Their shape is: (kernel_size[0], kernel_size[1], input_filters, output_filters)
Strides only change how often the kernel is applied, and therefore the size of the output, not how many weights the layer has.
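As a quick check (a minimal sketch, assuming the (None, 63, 143, 32) input from the question), the kernel shape is (3, 3, 32, 32), i.e. 3*3*32*32 = 9216 weights, plus 32 biases, giving the 9248 parameters reported for every stride:

from keras import layers as L

inp = L.Input(shape=(63, 143, 32))   # stand-in for the squeezed Conv3D output
conv = L.Conv2D(filters=32, kernel_size=(3, 3), strides=(3, 3), padding='valid')
conv(inp)                            # build the layer so its weights are created

print([w.shape for w in conv.get_weights()])  # [(3, 3, 32, 32), (32,)]
print(3 * 3 * 32 * 32 + 32)                   # 9248, independent of strides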
I suggest you read this page, which explains a lot about convolutions using sliding images, although they don't represent the input channels in the images.