Reputation: 67
Can someone please tell me why the sizes of the dense layer and the output layer are 256 and 10, respectively?
input = 1x28x28
conv2d1 (28-(5-1))=24 -> 32x24x24
maxpool1 32x12x12
conv2d2 (12-(3-1))=10 -> 32x10x10
maxpool2 32x5x5
dense 256
output 10
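For reference, here is how I would reproduce those sizes in Keras (the 5x5 and 3x3 kernel sizes and the default 'valid' padding are assumptions inferred from the arithmetic above):

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (5, 5), activation='relu', input_shape=(28, 28, 1)),  # -> 24x24x32
    tf.keras.layers.MaxPooling2D(2, 2),                                              # -> 12x12x32
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),                           # -> 10x10x32
    tf.keras.layers.MaxPooling2D(2, 2),                                              # -> 5x5x32
    tf.keras.layers.Flatten(),                                                       # -> 800
    tf.keras.layers.Dense(256, activation='relu'),                                   # -> 256
    tf.keras.layers.Dense(10, activation='softmax')                                  # -> 10
])
model.summary()  # prints the output shape of each layer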
Upvotes: 0
Views: 2894
Reputation: 1
The size depends on the number of units we specify when coding up the model. For example:
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
Here, in the Dense layer's arguments, we set the number of units to 128, so the size of that dense layer is 128.
The same applies to the last layer: it has 10 units because there are 10 classes to predict (e.g., classifying a digit between 0 and 9 from a set of images).
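One way to check this yourself is to call model.summary() on the model above; for this network the output shapes work out as follows (a quick sketch, assuming the default 'valid' padding):

model.summary()
# Expected output shape per layer:
# Conv2D(32, 3x3)    -> (None, 26, 26, 32)
# MaxPooling2D(2, 2) -> (None, 13, 13, 32)
# Conv2D(32, 3x3)    -> (None, 11, 11, 32)
# MaxPooling2D(2, 2) -> (None, 5, 5, 32)
# Flatten            -> (None, 800)
# Dense(128)         -> (None, 128)   set by the units argument
# Dense(10)          -> (None, 10)    one unit per class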
Upvotes: 0
Reputation: 1802
Convolution layers are different from fully connected layers. For a fully connected layer, you reshape the input to a single dimension and apply a matrix multiplication with the layer's weights (W*x + b).
You should read and understand the concepts here (the best tutorial for understanding how convnets work): http://cs231n.github.io/convolutional-networks/#conv
For the dense layers:
In your case, the first dense layer has weights of size [32*5*5, 256]. Reshape the output of the pool layer into one vector and feed it through the dense layers. The output of the first dense layer is a 256-dimensional vector; feed it through the second FC layer (weight size = [256, 10]) to get a 10-dimensional vector.
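As a rough illustration of those shapes (a NumPy sketch with random weights, just to show the dimensions, not the actual trained model):

import numpy as np

pool_out = np.random.rand(32, 5, 5)   # output of the last pool layer
x = pool_out.reshape(-1)              # flatten to one vector -> (800,)

W1 = np.random.rand(800, 256)         # first dense layer weights [32*5*5, 256]
b1 = np.random.rand(256)
h = np.maximum(0, x @ W1 + b1)        # ReLU(W*x + b) -> (256,)

W2 = np.random.rand(256, 10)          # second dense layer weights [256, 10]
b2 = np.random.rand(10)
scores = h @ W2 + b2                  # -> (10,), one score per class

print(h.shape, scores.shape)          # (256,) (10,)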
All the details of conv, pool, ReLU, and fully connected layers, and how to calculate the output size of each layer, are clearly explained in the link above.
Please go through it. I hope that helps.
Upvotes: 1