Ali Sharifi B.

Reputation: 595

Why do we normally have more than one fully connected layer in the late stages of CNNs?

As I noticed, in many popular convolutional neural network architectures (e.g. AlexNet), people use more than one fully connected layer, each with almost the same dimension, to gather the responses to features detected in the earlier layers.

Why don't we use just one FC layer for that? Why might this hierarchical arrangement of fully connected layers be more useful?


Upvotes: 7

Views: 4072

Answers (2)

SpinyNormam

Reputation: 183

Because there are some functions, such as XOR, that can't be modeled by a single layer. In this type of architecture the convolutional layers compute local features, and the fully connected output layer(s) then combine those local features to derive the final outputs. So you can consider the fully connected layers a semi-independent mapping from features to outputs, and if this mapping is complex you may need the expressive power of multiple layers.
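To make the XOR point concrete, here is a minimal numpy sketch (hypothetical layer sizes, learning rate, and iteration count) showing that adding one hidden fully connected layer lets the network fit XOR, which no single linear layer can separate:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets: not linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two fully connected layers: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)      # hidden layer supplies the non-linearity
    out = sigmoid(h @ W2 + b2)    # output layer combines hidden features
    # Cross-entropy loss: its gradient through the output sigmoid is (out - y).
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

preds = (out > 0.5).astype(int).ravel()
print(preds.tolist())
```

Dropping the hidden layer (a single `2 -> 1` sigmoid unit) leaves the model a logistic regression, which cannot reach zero error on XOR no matter how long it trains.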

Upvotes: 3

yura

Reputation: 14645

Actually, it's no longer popular/normal. Post-2015 networks (such as ResNet and Inception-v4) use global average pooling (GAP) as the last layer plus softmax, which gives the same performance with a much smaller model. The last two FC layers in VGG16 hold about 80% of all parameters in the network. But to answer your question: it is common to use a 2-layer MLP for classification and to consider the rest of the network feature generation. One layer would just be logistic regression, with a global minimum and simple properties; two layers add useful non-linearity while still training well with SGD.
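A minimal numpy sketch of the GAP idea (shapes borrowed from VGG16's final 512×7×7 feature map and 1000 output classes; softmax omitted), with a rough parameter-count comparison against the flatten + FC head:

```python
import numpy as np

# A batch of one conv feature map, shaped (batch, channels, height, width).
feat = np.random.rand(1, 512, 7, 7)

# Global average pooling: average each channel's 7x7 map to one number.
gap = feat.mean(axis=(2, 3))    # shape (1, 512)

# Classifier after GAP: a single 512 -> 1000 layer (weights + biases).
gap_params = 512 * 1000 + 1000

# VGG16's original FC head: flatten(25088) -> 4096 -> 4096 -> 1000.
fc_params = (25088 * 4096 + 4096) + (4096 * 4096 + 4096) + (4096 * 1000 + 1000)

print(gap.shape, gap_params, fc_params)
```

The FC head works out to roughly 124M parameters versus about 0.5M for the GAP head, which is why swapping it out shrinks the model so dramatically.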

Upvotes: 1
