k_p

Reputation: 313

ValueError: Expected input batch_size (24) to match target batch_size (8)

I found many links about this error and read different Stack Overflow answers related to it, but I am not able to figure it out. My image size is torch.Size([8, 3, 16, 16]). My architecture is below.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # linear layer (16*16 -> 768 hidden nodes)
        self.fc1 = nn.Linear(16 * 16, 768)
        self.fc2 = nn.Linear(768, 64)
        self.fc3 = nn.Linear(64, 10)
        self.dropout = nn.Dropout(p=.5)

    def forward(self, x):
        # flatten image input
        x = x.view(-1, 16 * 16)
        # add hidden layer, with relu activation function
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = F.log_softmax(self.fc3(x), dim=1)
        
        return x

# instantiate the model
model = Net()

# specify loss function
criterion = nn.NLLLoss()

# specify optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=.003)

# number of epochs to train the model
n_epochs = 30  # suggest training between 20-50 epochs

model.train() # prep model for training

for epoch in range(n_epochs):
    # monitor training loss
    train_loss = 0.0
    
    ###################
    # train the model #
    ###################
    for data, target in trainloader:
        # clear the gradients of all optimized variables
        optimizer.zero_grad()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # backward pass: compute gradient of the loss with respect to model parameters
        loss.backward()
        # perform a single optimization step (parameter update)
        optimizer.step()
        # update running training loss
        train_loss += loss.item()*data.size(0)
        
    # print training statistics 
    # calculate average loss over an epoch
    train_loss = train_loss/len(trainloader.dataset)

    print('Epoch: {} \tTraining Loss: {:.6f}'.format(
        epoch+1, 
        train_loss
        ))

I am getting this value error:

ValueError: Expected input batch_size (24) to match target batch_size (8).

How do I fix it? My batch size is 8, my input image size is (16*16), and this is a 10-class classification problem.

Upvotes: 1

Views: 4610

Answers (1)

Ivan

Reputation: 40648

Your input images have 3 channels, so your input feature size is 16*16*3, not 16*16. Currently, you treat each channel as a separate instance: after the x.view(-1, 16*16) flattening, the tensor fed to the classifier has shape (24, 16*16), so the model output has a batch size of 24 rather than the expected 8 (since 8*3 = 24), and it no longer matches the target batch size.

You could either:

  • Switch to a CNN, which handles multi-channel inputs (here 3 channels) natively.
  • Keep the fully connected model but give self.fc1 16*16*3 input features (see the sketch below).
  • If the input is RGB, you could even convert it to a single-channel grayscale image first.
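
A minimal sketch of the second option, assuming the same (8, 3, 16, 16) input batches and 10 classes as in the question. The only changes are the fc1 input size and a flatten that preserves the batch dimension:

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 3 channels * 16 * 16 = 768 input features
        self.fc1 = nn.Linear(16 * 16 * 3, 768)
        self.fc2 = nn.Linear(768, 64)
        self.fc3 = nn.Linear(64, 10)
        self.dropout = nn.Dropout(p=.5)

    def forward(self, x):
        # flatten channels and spatial dims together, keeping the batch dimension:
        # (8, 3, 16, 16) -> (8, 768)
        x = x.view(x.size(0), -1)
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = F.log_softmax(self.fc3(x), dim=1)
        return x

With x.view(x.size(0), -1) the batch size stays 8, the output has shape (8, 10), and NLLLoss receives matching input and target batch sizes.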

Upvotes: 1
