Reputation: 219
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
train_set = torchvision.datasets.MNIST(root='./data/MNIST', train=True, download=True,
                                       transform=transforms.Compose([transforms.ToTensor()]))
print(len(train_set))
# 60000
train_loader = torch.utils.data.DataLoader(train_set, batch_size=100)
print(len(train_loader))
# 600
It seems like the length of train_loader decreased because of the batch_size.
I think each batch holds 100 image tensors and their corresponding labels. I just want to see the elements, or their shapes. How can I do that? Also,
### Model Omitted ###
model = ConvNet().to(device)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr = learning_rate)
for epoch in range(5):
    avg_cost = 0
    for data, target in train_loader:
        data = data.to(device)
        target = target.to(device)
        optimizer.zero_grad()
        hypothesis = model(data)
        cost = criterion(hypothesis, target)
        cost.backward()
        optimizer.step()
        avg_cost += cost / len(train_loader)
    print('[Epoch: {:>4}] cost = {:>.9}'.format(epoch + 1, avg_cost))
I think each epoch trains on all 60,000 samples, right? Then shouldn't avg_cost be divided by 60,000 instead of 600 (which is len(train_loader))? Am I wrong about that?
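For context, nn.CrossEntropyLoss uses reduction='mean' by default, so each cost is already the average over the 100 samples in its batch; summing 600 such batch means and dividing by len(train_loader) then gives a mean loss per sample for the epoch. A minimal sketch with made-up tensors (the names below are just for illustration):

batch_criterion = nn.CrossEntropyLoss()        # default reduction='mean'
logits = torch.randn(100, 10)                  # one batch: 100 samples, 10 classes
targets = torch.randint(0, 10, (100,))
batch_cost = batch_criterion(logits, targets)  # scalar, already averaged over the 100 samples
print(batch_cost.shape)
# torch.Size([])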
Upvotes: 1
Views: 3049
Reputation: 1656
You can get one batch of training data from train_loader
using the code below, and you can easily check its shape. I hope this helps you get what you want.
batch = iter(train_loader)
images, labels = next(batch)  # use the built-in next(); the .next() method is not available in recent versions
print(images.shape)
# torch.Size([100, 1, 28, 28])  i.e. [num_samples, in_channels, H, W]
print(labels.shape)
# torch.Size([100])
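If you also want to look at a single element rather than a whole batch, you can index the dataset directly (a small sketch, assuming the ToTensor transform from the question):

image, label = train_set[0]
print(image.shape)
# torch.Size([1, 28, 28]) -- one MNIST image before batching
print(label)
# an integer class label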
Upvotes: 2