Reputation: 217
I have the Keras and PyTorch code for the same neural network. Some of the lines are switched around between the two. I am wondering why, in the PyTorch version, max pooling comes before batch normalization and ReLU activation, while in Keras it comes after those two layers. For the flattening, I'm also confused about how PyTorch arrives at 64 * 7 * 7 (where do the 7s come from?).
Here's the Keras version of the shallow AlexNet:
def shallownet(nb_classes):
    global img_size
    model = Sequential()
    model.add(Conv2D(64, (5, 5), input_shape=img_size, data_format='channels_first'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same', data_format='channels_first'))
    model.add(Conv2D(64, (5, 5), padding='same', data_format='channels_first'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same', data_format='channels_first'))
    model.add(Flatten())
    model.add(Dense(384))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(192))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(nb_classes, activation='softmax'))
    return model
and the PyTorch version:
class AlexNet(nn.Module):
    def __init__(self, num_classes=10):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2, bias=False),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=5, padding=2, bias=False),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Sequential(
            nn.Linear(64 * 7 * 7, 384, bias=False),
            nn.BatchNorm1d(384),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(384, 192, bias=False),
            nn.BatchNorm1d(192),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(192, num_classes)
        )
        self.regime = {
            0: {'optimizer': 'SGD', 'lr': 1e-3,
                'weight_decay': 5e-4, 'momentum': 0.9},
            60: {'lr': 1e-2},
            120: {'lr': 1e-3},
            180: {'lr': 1e-4}
        }

    def forward(self, x):
        x = self.features(x)
        x = x.view(-1, 64 * 7 * 7)
        x = self.classifier(x)
        return F.log_softmax(x)


def cifar10_shallow(**kwargs):
    num_classes = getattr(kwargs, 'num_classes', 10)
    return AlexNet(num_classes)


def cifar100_shallow(**kwargs):
    num_classes = getattr(kwargs, 'num_classes', 100)
    return AlexNet(num_classes)
Upvotes: 0
Views: 568
Reputation: 310
Max pooling downsamples the data by picking the maximum of a pool of values. Which value wins that comparison is not changed by batch normalization or ReLU activation, because both are monotonically non-decreasing functions (the quick check below shows this numerically):
relu(x) = max(0, x)
bn(x) = (x - mu) / sigma
Therefore, it doesn't really matter whether max pooling comes before or after those two layers (putting it first can be slightly more efficient, since the normalization and activation then run on a smaller feature map).
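Here is a minimal sketch (mine, not part of either model) showing that swapping max pooling and ReLU gives the same result on a random feature map; the same holds for a fixed increasing affine transform like the batch-norm formula above:
import torch
import torch.nn as nn

# Any (N, C, H, W) tensor works; values chosen at random.
torch.manual_seed(0)
x = torch.randn(1, 64, 32, 32)

pool = nn.MaxPool2d(kernel_size=3, stride=2)
relu = nn.ReLU()

# ReLU is monotonically non-decreasing, so pooling before or after it
# picks out the same element in every window.
print(torch.allclose(pool(relu(x)), relu(pool(x))))  # True

# The same argument applies to a fixed batch-norm-style affine transform
# (x - mu) / sigma with sigma > 0.
mu, sigma = x.mean(), x.std()
print(torch.allclose(pool((x - mu) / sigma), (pool(x) - mu) / sigma))  # True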
Regarding the flattening, I believe the 7s are the spatial dimensions of the feature map just before Flatten(), i.e. H = W = 7. Thus, the total number of values is the spatial area times the number of channels, which is 64 * 7 * 7 (the sketch below confirms this for a 32x32 input).
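As a sketch (assuming the 32x32 CIFAR input the model appears to target), you can push a dummy batch through the same feature stack and print the shape:
import torch
import torch.nn as nn

# Same feature stack as in the question's PyTorch model.
features = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=5, padding=2, bias=False),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=5, padding=2, bias=False),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

x = torch.randn(2, 3, 32, 32)   # dummy CIFAR-sized batch (assumed input size)
print(features(x).shape)        # torch.Size([2, 64, 7, 7])
# Spatial size: 32 -> conv(5, pad 2) -> 32 -> pool(3, stride 2) -> 15
#                  -> conv(5, pad 2) -> 15 -> pool(3, stride 2) -> 7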
Upvotes: 1