Reputation: 597
I'm new to deep learning and PyTorch, but I hope someone can help me out with this. My dataset contains images of different sizes. I'm trying to create a simple neural network that can classify images, but I'm getting size mismatch errors.
Neural network
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3)
        self.conv2 = nn.Conv2d(32, 32, 3)
        self.fc1 = nn.Linear(32 * 3 * 3, 200)
        self.fc2 = nn.Linear(200, 120)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

net = Net()
My first convolutional layer has 1 input channel because I transform the images to grayscale. The choice of 32 output channels was arbitrary. The final fully-connected layer has 120 output features because there are 120 different classes.
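For reference, a quick shape check on a dummy grayscale input (a standalone sketch, not part of my training code):

import torch
import torch.nn as nn

# Sketch: trace what the two conv layers above produce for a single
# 1x32x32 grayscale image, i.e. what fc1 actually receives.
conv1 = nn.Conv2d(1, 32, 3)
conv2 = nn.Conv2d(32, 32, 3)
dummy = torch.randn(1, 1, 32, 32)
print(conv2(conv1(dummy)).shape)   # torch.Size([1, 32, 28, 28])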
Determine transformations and assign training set and validation set
import os

import torch
from torchvision import datasets, transforms

transform = transforms.Compose(
    [transforms.Grayscale(1),
     transforms.RandomCrop((32, 32)),
     transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

data_dir = 'dataset'
full_dataset = datasets.ImageFolder(os.path.join(data_dir, 'train'), transform=transform)

train_size = int(0.8 * len(full_dataset))
val_size = len(full_dataset) - train_size
trainset, valset = torch.utils.data.random_split(full_dataset, [train_size, val_size])

trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
valloader = torch.utils.data.DataLoader(valset, batch_size=4,
                                        shuffle=False, num_workers=2)

classes = full_dataset.classes
I transform the images to grayscale because they are gray anyway. I crop the images to 32x32 because they have different sizes and I figured they must all be the same size when putting them through the neural network. Everything is working fine so far.
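For reference, a quick sanity check on the pipeline (a sketch, not part of my original code):

# Sketch: pull one transformed sample and confirm it is a single-channel
# 32x32 tensor, and count how many classes ImageFolder found.
img, label = trainset[0]
print(img.shape)     # expected: torch.Size([1, 32, 32])
print(len(classes))  # should be 120 for this dataset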
Train neural network
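The loop below uses a criterion and an optimizer that are not shown in this post; an assumed setup (for example, cross-entropy loss with SGD, in the style of the PyTorch classifier tutorial this code is based on) would be:

import torch.nn as nn
import torch.optim as optim

# Assumed setup (not shown in the post): a classification loss and a
# plain SGD optimizer over the network's parameters.
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)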
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
When running this last piece of code, I get the following error at the line outputs = net(inputs):

size mismatch, m1: [3584 x 28], m2: [288 x 200] at /Users/soumith/miniconda2/conda-bld/pytorch_1532623076075/work/aten/src/TH/generic/THTensorMath.cpp:2070
My code is a variation of the code provided in this PyTorch tutorial. Can someone tell me what I'm doing wrong?
UPDATE
I updated the neural network class to this:
class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()
But now I get an error at loss = criterion(outputs, labels):

Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /Users/soumith/miniconda2/conda-bld/pytorch_1532623076075/work/aten/src/THNN/generic/ClassNLLCriterion.c:93
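For what it's worth, this assertion means that some target label falls outside the range [0, n_classes) that the loss expects, where n_classes is the output size of the final layer. A quick check (hypothetical, not in my code above):

# Sketch: compare the number of dataset classes with the size of the
# network's final output layer; the loss requires every label to be
# in [0, out_features).
print(len(classes))            # number of classes found by ImageFolder
print(net.fc3.out_features)    # output size of the updated network's last layer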
Upvotes: 0
Views: 152
Reputation: 941
In your first configuration, you have configured self.fc1 incorrectly. You need the input to be of dimension 32 * 28 * 28 instead of 32 * 3 * 3, because your images are 32 x 32 and the kernel size and stride are 3 and 1 respectively. See this video for a simpler explanation. Try adjusting your second configuration yourself now; if you can't, comment below.
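For reference, the arithmetic behind those numbers (a sketch using the shapes from the question, not part of the original answer):

# Each 3x3 conv with stride 1 and no padding shrinks each spatial
# dimension by kernel_size - 1 = 2.
h = w = 32
h, w = h - 3 + 1, w - 3 + 1    # after conv1: 30 x 30
h, w = h - 3 + 1, w - 3 + 1    # after conv2: 28 x 28
print(32 * h * w)              # 25088 = 32 * 28 * 28, the in_features fc1 needs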
Upvotes: 2