lecose

Reputation: 71

Error when training CNN: "RuntimeError: The size of tensor a (10) must match the size of tensor b (64) at non-singleton dimension 1"

I'm new to PyTorch and I'm trying to implement a simple CNN to recognize MNIST images.

I'm training the network using MSE loss as the loss function and SGD as the optimizer. When training starts I first get the following warning:

    UserWarning: Using a target size (torch.Size([64])) that is different to the input size (torch.Size([64, 10])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.

and then the following error:

    RuntimeError: The size of tensor a (10) must match the size of tensor b (64) at non-singleton dimension 1

I've tried to solve it using some solutions I've found in other questions but nothing seems to work. Here's the code of how I load the dataset:

    import torch
    import torchvision
    import torchvision.transforms as transforms

    transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])

    trainset = torchvision.datasets.MNIST(root='./data', train=True, transform=transform, download=True)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

    testset = torchvision.datasets.MNIST(root='./data', train=False, transform=transform, download=True)
    testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=False)

The code to define my network:

    import torch.nn as nn
    import torch.nn.functional as F

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            # Convolutional layers
            self.conv1 = nn.Conv2d(1, 6, 5)
            self.conv2 = nn.Conv2d(6, 12, 5)
            # Fully connected layers
            self.fc1 = nn.Linear(12*4*4, 120)
            self.fc2 = nn.Linear(120, 60)
            self.out = nn.Linear(60, 10)

        def forward(self, x):
            x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
            x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2))
            x = x.reshape(-1, 12*4*4)
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            x = self.out(x)
            return x

And this is the training:

    import torch.optim as optim

    net = Net()
    print(net)

    criterion = nn.MSELoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001)
    epochs = 3

    for epoch in range(epochs):
        running_loss = 0
        for images, labels in trainloader:
            optimizer.zero_grad()
            output = net(images)
            loss = criterion(output, labels)
            loss.backward()
            optimizer.step()

            running_loss += loss.item()
        else:
            print(f"Training loss: {running_loss/len(trainloader)}")

    print('Finished training')

Thank you!

Upvotes: 1

Views: 1955

Answers (2)

Wamiq Raza AI

Reputation: 39

I agree with @AshwinNair's advice. I also changed the for loop in the train and eval sections as below, and it worked for me.

    for i, (img, label) in enumerate(dataloader):
        img = img.to(device)
        label = label.to(device)
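
For context, here is a minimal, self-contained sketch of how that fits into a full training loop. The dummy dataset, the placeholder model, and the device selection are assumptions for illustration, not code from the question:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Assumption: use the GPU when one is available, otherwise stay on the CPU
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Dummy stand-ins for the MNIST data and the network from the question
    dataset = TensorDataset(torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,)))
    dataloader = DataLoader(dataset, batch_size=64)
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).to(device)  # model moved to the device as well
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

    for i, (img, label) in enumerate(dataloader):
        img = img.to(device)      # move the image batch to the same device as the model
        label = label.to(device)  # move the labels as well

        optimizer.zero_grad()
        output = model(img)
        loss = criterion(output, label)
        loss.backward()
        optimizer.step()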

Upvotes: 0

ashnair1

Reputation: 365

The loss you're using (nn.MSELoss) is incorrect for this problem. You should use nn.CrossEntropyLoss.

Mean squared error loss measures the mean squared error between an input x and a target y, so the input and target naturally need to have the same shape.

Cross entropy loss computes a probability over the classes for each image. The output should be a matrix of size N x C and the target a vector of size N (N = batch size, C = number of classes).

Since your aim is to classify the image, this is what you'll want to use.

In your case, your network output will be a matrix of size 64 x 10 and the target a vector of size 64. Each row of the output matrix (after applying the softmax function) gives the probabilities of the classes for that image, from which the cross entropy loss is computed. PyTorch's nn.CrossEntropyLoss combines the softmax operation with the loss computation.
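
For illustration, here is a minimal sketch of the shapes involved; the random output and labels tensors below are dummy stand-ins for a batch of network outputs and MNIST labels:

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()        # expects raw logits; applies softmax internally

    output = torch.randn(64, 10)             # stand-in for the network output: N x C logits
    labels = torch.randint(0, 10, (64,))     # stand-in for the targets: N class indices

    loss = criterion(output, labels)         # shapes (64, 10) and (64,) are exactly what it expects

Note that the targets are class indices, not one-hot vectors, which is why no reshaping of the labels is needed.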

You can refer to the documentation here for more info on how PyTorch computes losses.

Upvotes: 5
