Reputation: 21
I wanted to plot the loss of my CNN, so before training I created the lists
test_loss_history = []
train_loss_history = []
and appended the values after every epoch with
train_loss_history.append(train_loss)
test_loss_history.append(test_loss)
I had done the same with the accuracy before, but when I add these lines for the loss, the accuracy drops by around 40%. Does storing values affect the training process in any way?
I am using Google Colab and train a ResNet18 with a subset of MNIST.
My code looks like this:
train_loss_history = []
train_acc_history = []
for epoch in range(epoch_resume, opt.max_epochs):
    ...
    for i, data in enumerate(trainloader, 0):
        train_loss += imgs.size(0)*criterion(logits, labels).data
        ...
    train_loss /= len(trainset)
    train_acc_history.append(train_acc)
    train_loss_history.append(train_loss)
Upvotes: 1
Views: 416
Reputation: 3727
train_loss += imgs.size(0)*criterion(logits, labels).data
I am assuming that train_loss is what you are backpropagating through (i.e. your code is calling train_loss.backward()). When saving the values in the list (for plotting later), use the .item() function, i.e.
train_loss_history.append(train_loss.item())
Most likely, you are storing a reference to the loss tensor (and eventually you will run out of memory). Calling .item() gives you the scalar value from the loss tensor and does not carry the tensor (and its attached computation graph) around.
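A minimal sketch of the difference, using a tiny stand-in tensor for the loss (the values here are made up for illustration):

```python
import torch

# A tensor that, like a real loss, is still attached to the autograd graph.
loss = torch.tensor(2.5, requires_grad=True) * 2

history_bad = [loss]          # stores the tensor, keeping its whole graph alive
history_good = [loss.item()]  # stores a plain Python float, no graph retained

assert history_bad[0].grad_fn is not None      # graph reference is kept
assert isinstance(history_good[0], float)      # just a number
```

Appending the plain float keeps the history list cheap and leaves the computation graph free to be garbage-collected after backward().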
Beyond your immediate question, you should not be using the .data attribute. Are you on a very old version of PyTorch (maybe 0.3 or lower)? If yes, you should consider upgrading. You can find some more info on .item(), .data and upgrading PyTorch here. It's an old blog post, but it seems to apply to your case.
Upvotes: 0
Reputation: 320
You can just use TensorBoard to plot the loss and any other metrics you want to keep track of, via its default callback.
There is no need to save the metrics yourself when TensorBoard has your back.
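In PyTorch this can be done with torch.utils.tensorboard's SummaryWriter rather than a Keras-style callback. A hedged sketch, where the log directory and loss values are made up for illustration:

```python
import tempfile
from torch.utils.tensorboard import SummaryWriter

# Hypothetical log directory; in practice pick something like "runs/mnist_resnet18".
logdir = tempfile.mkdtemp()
writer = SummaryWriter(logdir)

for epoch in range(3):
    train_loss = 1.0 / (epoch + 1)  # placeholder for the real per-epoch loss
    writer.add_scalar("Loss/train", train_loss, epoch)  # logged instead of list.append

writer.close()
```

You can then view the curves with `tensorboard --logdir <logdir>`; in Colab, `%load_ext tensorboard` followed by `%tensorboard --logdir <logdir>` works too.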
Upvotes: 1