Reputation: 415
Does anyone have a clue what the error here could be? I have already looked at other Stack Overflow and PyTorch forum threads, but I didn't find anything 😕
My dataset is from https://github.com/skyatmoon/CHoiCe-Dataset. For labels, I use the names of the directories in which the images are stored.
If you need more code/information, don't hesitate to ask.
Train Method
def train():
    criterion = nn.MSELoss()
    optimizer = optim.SGD(model.parameters(), lr=0.3, momentum=0.9)
    for epoch in range(3000):
        running_loss = 0
        for images, labels in dataloader:
            optimizer.zero_grad()
            output = model(images)
            loss = criterion(output, labels.view(1, -1))
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
Model
model = nn.Sequential(
    nn.Linear(28, 16),
    nn.Sigmoid(),
    nn.Linear(16, 16),
    nn.Sigmoid(),
    nn.Linear(16, 61)
)
DataLoader
dataloader = DataLoader(
    dataset=dataset,
    batch_size=64,
    shuffle=True,
)
Upvotes: 0
Views: 1379
Reputation: 126
I think you should also share the code where you create the dataset and dataloader; it is too vague as shown here. However, it seems like the output dimension of the model (..., 61) does not match the dimension of the labels (..., 64). You should check the number of labels that you create from the directory names. Also, you seem to be feeding 3D input (images) into dense layers (nn.Linear), which is not a good idea for data like images. You can use convolutional layers instead, as in the sketch below.
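A minimal sketch of what I mean, assuming (from your question) single-channel 28x28 images, 61 classes, and a dataloader that yields integer class labels of shape (batch,). With nn.CrossEntropyLoss the labels.view(1, -1) reshape is not needed:
import torch
import torch.nn as nn
import torch.optim as optim

NUM_CLASSES = 61  # assumed number of classes from your model's last layer

# Small convolutional network instead of stacked nn.Linear layers,
# so the 2D structure of the images is preserved.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # (N, 1, 28, 28) -> (N, 16, 28, 28)
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> (N, 16, 14, 14)
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> (N, 32, 14, 14)
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> (N, 32, 7, 7)
    nn.Flatten(),                                 # -> (N, 32 * 7 * 7)
    nn.Linear(32 * 7 * 7, NUM_CLASSES),           # -> (N, 61) raw class scores
)

# CrossEntropyLoss takes raw scores of shape (N, C) and integer class
# labels of shape (N,), so no reshaping of the labels is needed.
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for images, labels in dataloader:   # assumes images: (N, 1, 28, 28), labels: (N,)
    optimizer.zero_grad()
    output = model(images)          # (N, 61)
    loss = criterion(output, labels)
    loss.backward()
    optimizer.step()
Since your labels come from directory names, torchvision.datasets.ImageFolder would give you exactly this kind of dataset: it maps each subdirectory to an integer class index automatically.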
You should also check a similar question asked here.
Upvotes: 2