Urvi Soni

Reputation: 314

RuntimeError: size mismatch m1: [a x b], m2: [c x d]

Can anyone help me with this? I am getting the error below. I am using Google Colab. How do I solve this error?

size mismatch, m1: [64 x 100], m2: [784 x 128] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:2070

Below is the code I am trying to run.

    import torch
    from torch import nn
    import torch.nn.functional as F
    from torchvision import datasets, transforms

    # Define a transform to normalize the data
    transform = transforms.Compose([transforms.CenterCrop(10), transforms.ToTensor()])

    # Download and load the training data
    trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True,
                              train=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=64,
                                              shuffle=True)

    # Build a feed-forward network
    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                          nn.Linear(128, 64), nn.ReLU(),
                          nn.Linear(64, 10))

    # Define the loss
    criterion = nn.CrossEntropyLoss()

    # Get our data
    images, labels = next(iter(trainloader))
    # Flatten images
    images = images.view(images.shape[0], -1)

    # Forward pass, get our logits
    logits = model(images)
    # Calculate the loss with the logits and the labels
    loss = criterion(logits, labels)
    print(loss)

Upvotes: 10

Views: 10795

Answers (2)

Shai

Reputation: 114926

You have a size mismatch!
The first layer of your model expects a 784-dim input (I assume you got this value from 28x28=784, the size of MNIST digits).
However, your trainset applies transforms.CenterCrop(10) - that is, it crops a 10x10 region from the center of the image, so your input dimension is actually 100.

To summarize:

  • Your first layer: nn.Linear(784, 128) expects a 784-dim input and outputs a 128-dim hidden feature vector (per input). This layer's weight matrix is thus [784 x 128] ("m2" in your error message).
  • Your input is center-cropped to 10x10 pixels (100-dim in total), and you have batch_size=64 such images in each batch, for a total input size of [64 x 100] ("m1" in your error message).
  • You cannot compute a dot-product between matrices with mismatched sizes: 100 != 784, which is why PyTorch raises this error. One possible fix is sketched below.
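
A minimal sketch of one possible fix, keeping the CenterCrop(10) transform from the question: make the first linear layer accept 100 inputs instead of 784 (alternatively, drop the crop and keep nn.Linear(784, 128)). Everything else mirrors the question's code.

    import torch
    from torch import nn
    from torchvision import datasets, transforms

    # Keep the 10x10 center crop, so each flattened image has 100 features
    transform = transforms.Compose([transforms.CenterCrop(10), transforms.ToTensor()])
    trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True,
                              train=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

    # First layer now expects 100 inputs, matching the cropped images
    model = nn.Sequential(nn.Linear(100, 128), nn.ReLU(),
                          nn.Linear(128, 64), nn.ReLU(),
                          nn.Linear(64, 10))

    criterion = nn.CrossEntropyLoss()
    images, labels = next(iter(trainloader))
    images = images.view(images.shape[0], -1)   # shape [64, 100]
    loss = criterion(model(images), labels)
    print(loss)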

Upvotes: 6

prosti

Reputation: 46449

All you have to care about is that b = c, and you are done:

m1: [a x b], m2: [c x d]

m1 is [a x b], which is [batch size x in features]

m2 is [c x d], which is [in features x out features]
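
For example, with the question's setup (a batch of 64 center-cropped 10x10 images, i.e. 100 features), a rough sketch of the rule:

    import torch
    from torch import nn

    x = torch.randn(64, 100)       # m1: [batch size x in features] = [64 x 100]

    bad = nn.Linear(784, 128)      # m2: [in features x out features] = [784 x 128]
    # bad(x) raises the size-mismatch error, because b (100) != c (784)

    good = nn.Linear(100, 128)     # now b == c == 100
    print(good(x).shape)           # torch.Size([64, 128]) -> [a x d]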

Upvotes: 8
